Sep 10 00:40:15.075269 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Sep 9 23:10:34 -00 2025 Sep 10 00:40:15.075313 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:40:15.075325 kernel: BIOS-provided physical RAM map: Sep 10 00:40:15.075333 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 10 00:40:15.075341 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 10 00:40:15.075348 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 10 00:40:15.075377 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 10 00:40:15.075385 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 10 00:40:15.075396 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 10 00:40:15.075403 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 10 00:40:15.075410 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 10 00:40:15.075418 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 10 00:40:15.075426 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 10 00:40:15.075433 kernel: NX (Execute Disable) protection: active Sep 10 00:40:15.075450 kernel: SMBIOS 2.8 present. Sep 10 00:40:15.075458 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 10 00:40:15.075466 kernel: Hypervisor detected: KVM Sep 10 00:40:15.075475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 10 00:40:15.075486 kernel: kvm-clock: cpu 0, msr 6a19f001, primary cpu clock Sep 10 00:40:15.075494 kernel: kvm-clock: using sched offset of 3250456216 cycles Sep 10 00:40:15.075503 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 10 00:40:15.075512 kernel: tsc: Detected 2794.750 MHz processor Sep 10 00:40:15.075521 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 10 00:40:15.075534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 10 00:40:15.075543 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 10 00:40:15.075552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 10 00:40:15.075560 kernel: Using GB pages for direct mapping Sep 10 00:40:15.075569 kernel: ACPI: Early table checksum verification disabled Sep 10 00:40:15.075578 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 10 00:40:15.075586 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075595 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075604 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075614 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 10 00:40:15.075623 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075631 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075640 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075649 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:40:15.075657 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 10 00:40:15.075666 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 10 00:40:15.075675 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 10 00:40:15.075691 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 10 00:40:15.075700 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 10 00:40:15.075709 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 10 00:40:15.075718 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 10 00:40:15.075727 kernel: No NUMA configuration found Sep 10 00:40:15.075735 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 10 00:40:15.075750 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 10 00:40:15.075759 kernel: Zone ranges: Sep 10 00:40:15.075768 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 10 00:40:15.075777 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 10 00:40:15.075786 kernel: Normal empty Sep 10 00:40:15.075795 kernel: Movable zone start for each node Sep 10 00:40:15.075805 kernel: Early memory node ranges Sep 10 00:40:15.075814 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 10 00:40:15.075823 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 10 00:40:15.075837 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 10 00:40:15.075848 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 00:40:15.075858 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 10 00:40:15.075867 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 10 00:40:15.075876 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 10 00:40:15.075885 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 10 00:40:15.075894 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 10 00:40:15.075948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 10 00:40:15.075958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 10 00:40:15.075967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 10 00:40:15.075990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 10 00:40:15.076000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 10 00:40:15.076009 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 10 00:40:15.076018 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 10 00:40:15.076027 kernel: TSC deadline timer available Sep 10 00:40:15.076037 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 10 00:40:15.076046 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 10 00:40:15.076055 kernel: kvm-guest: setup PV sched yield Sep 10 00:40:15.076064 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 10 00:40:15.076077 kernel: Booting paravirtualized kernel on KVM Sep 10 00:40:15.076086 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 10 00:40:15.076096 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 10 00:40:15.076105 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Sep 10 00:40:15.076114 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 10 00:40:15.076123 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 10 00:40:15.076132 kernel: kvm-guest: setup async PF for cpu 0 Sep 10 00:40:15.076141 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Sep 10 00:40:15.076150 kernel: kvm-guest: PV spinlocks enabled Sep 10 00:40:15.076161 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 10 00:40:15.076170 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Sep 10 00:40:15.076178 kernel: Policy zone: DMA32 Sep 10 00:40:15.076189 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:40:15.076199 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 00:40:15.076208 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 00:40:15.076217 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 00:40:15.076226 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 00:40:15.076241 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved) Sep 10 00:40:15.076251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 00:40:15.076260 kernel: ftrace: allocating 34612 entries in 136 pages Sep 10 00:40:15.076269 kernel: ftrace: allocated 136 pages with 2 groups Sep 10 00:40:15.076278 kernel: rcu: Hierarchical RCU implementation. Sep 10 00:40:15.076288 kernel: rcu: RCU event tracing is enabled. Sep 10 00:40:15.076297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 10 00:40:15.076307 kernel: Rude variant of Tasks RCU enabled. Sep 10 00:40:15.076316 kernel: Tracing variant of Tasks RCU enabled. Sep 10 00:40:15.076328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 10 00:40:15.076337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 00:40:15.076346 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 10 00:40:15.076355 kernel: random: crng init done Sep 10 00:40:15.076373 kernel: Console: colour VGA+ 80x25 Sep 10 00:40:15.076383 kernel: printk: console [ttyS0] enabled Sep 10 00:40:15.076392 kernel: ACPI: Core revision 20210730 Sep 10 00:40:15.076401 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 10 00:40:15.076410 kernel: APIC: Switch to symmetric I/O mode setup Sep 10 00:40:15.076421 kernel: x2apic enabled Sep 10 00:40:15.076430 kernel: Switched APIC routing to physical x2apic. Sep 10 00:40:15.076443 kernel: kvm-guest: setup PV IPIs Sep 10 00:40:15.076452 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 10 00:40:15.076461 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 10 00:40:15.076474 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 10 00:40:15.076483 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 10 00:40:15.076492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 10 00:40:15.076501 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 10 00:40:15.076519 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 10 00:40:15.076529 kernel: Spectre V2 : Mitigation: Retpolines Sep 10 00:40:15.076539 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 10 00:40:15.076550 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 10 00:40:15.076560 kernel: active return thunk: retbleed_return_thunk Sep 10 00:40:15.076569 kernel: RETBleed: Mitigation: untrained return thunk Sep 10 00:40:15.076579 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 10 00:40:15.076589 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 10 00:40:15.076599 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 10 00:40:15.076610 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 10 00:40:15.076620 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 10 00:40:15.076630 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 10 00:40:15.076639 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 10 00:40:15.076649 kernel: Freeing SMP alternatives memory: 32K Sep 10 00:40:15.076658 kernel: pid_max: default: 32768 minimum: 301 Sep 10 00:40:15.076668 kernel: LSM: Security Framework initializing Sep 10 00:40:15.076679 kernel: SELinux: Initializing. Sep 10 00:40:15.076688 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:40:15.076698 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:40:15.076707 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 10 00:40:15.076717 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 10 00:40:15.076726 kernel: ... version: 0 Sep 10 00:40:15.076736 kernel: ... bit width: 48 Sep 10 00:40:15.076746 kernel: ... generic registers: 6 Sep 10 00:40:15.076755 kernel: ... value mask: 0000ffffffffffff Sep 10 00:40:15.076765 kernel: ... max period: 00007fffffffffff Sep 10 00:40:15.076776 kernel: ... fixed-purpose events: 0 Sep 10 00:40:15.076785 kernel: ... event mask: 000000000000003f Sep 10 00:40:15.076795 kernel: signal: max sigframe size: 1776 Sep 10 00:40:15.076804 kernel: rcu: Hierarchical SRCU implementation. Sep 10 00:40:15.076814 kernel: smp: Bringing up secondary CPUs ... Sep 10 00:40:15.076823 kernel: x86: Booting SMP configuration: Sep 10 00:40:15.076832 kernel: .... 
node #0, CPUs: #1 Sep 10 00:40:15.076841 kernel: kvm-clock: cpu 1, msr 6a19f041, secondary cpu clock Sep 10 00:40:15.076851 kernel: kvm-guest: setup async PF for cpu 1 Sep 10 00:40:15.076862 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Sep 10 00:40:15.076872 kernel: #2 Sep 10 00:40:15.076881 kernel: kvm-clock: cpu 2, msr 6a19f081, secondary cpu clock Sep 10 00:40:15.076891 kernel: kvm-guest: setup async PF for cpu 2 Sep 10 00:40:15.076923 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Sep 10 00:40:15.076937 kernel: #3 Sep 10 00:40:15.076952 kernel: kvm-clock: cpu 3, msr 6a19f0c1, secondary cpu clock Sep 10 00:40:15.076961 kernel: kvm-guest: setup async PF for cpu 3 Sep 10 00:40:15.076971 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Sep 10 00:40:15.076983 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 00:40:15.076992 kernel: smpboot: Max logical packages: 1 Sep 10 00:40:15.077002 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 10 00:40:15.077011 kernel: devtmpfs: initialized Sep 10 00:40:15.077020 kernel: x86/mm: Memory block size: 128MB Sep 10 00:40:15.077030 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 00:40:15.077040 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 00:40:15.077050 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 00:40:15.077059 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 00:40:15.077072 kernel: audit: initializing netlink subsys (disabled) Sep 10 00:40:15.077081 kernel: audit: type=2000 audit(1757464815.381:1): state=initialized audit_enabled=0 res=1 Sep 10 00:40:15.077091 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 00:40:15.077100 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 10 00:40:15.077110 kernel: cpuidle: using governor menu Sep 10 00:40:15.077119 kernel: ACPI: bus type PCI registered Sep 10 00:40:15.077129 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 00:40:15.077138 kernel: dca service started, version 1.12.1 Sep 10 00:40:15.077148 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 10 00:40:15.077159 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 10 00:40:15.077169 kernel: PCI: Using configuration type 1 for base access Sep 10 00:40:15.077179 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 10 00:40:15.077189 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 00:40:15.077198 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 00:40:15.077208 kernel: ACPI: Added _OSI(Module Device) Sep 10 00:40:15.077217 kernel: ACPI: Added _OSI(Processor Device) Sep 10 00:40:15.077226 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 00:40:15.077236 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 10 00:40:15.077247 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 10 00:40:15.077257 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 10 00:40:15.077266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 00:40:15.077276 kernel: ACPI: Interpreter enabled Sep 10 00:40:15.077285 kernel: ACPI: PM: (supports S0 S3 S5) Sep 10 00:40:15.077294 kernel: ACPI: Using IOAPIC for interrupt routing Sep 10 00:40:15.077304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 10 00:40:15.077314 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 10 00:40:15.077323 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 00:40:15.077584 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 00:40:15.077690 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 10 00:40:15.077793 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 10 00:40:15.077807 kernel: PCI host bridge to bus 0000:00 Sep 10 00:40:15.077973 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 10 00:40:15.078067 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 10 00:40:15.078159 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 10 00:40:15.078239 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 10 00:40:15.078333 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 10 00:40:15.078434 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 10 00:40:15.078521 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 00:40:15.078665 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 10 00:40:15.078785 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 10 00:40:15.078912 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 10 00:40:15.079014 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 10 00:40:15.079108 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 10 00:40:15.079212 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 10 00:40:15.079331 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 10 00:40:15.079443 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 10 00:40:15.079631 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 10 00:40:15.079747 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 10 00:40:15.079869 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 10 00:40:15.080018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 10 00:40:15.080128 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 10 00:40:15.080256 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 10 00:40:15.080412 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 10 00:40:15.080582 
kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 10 00:40:15.080754 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 10 00:40:15.080878 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 10 00:40:15.081073 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 10 00:40:15.081225 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 10 00:40:15.081326 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 10 00:40:15.081450 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 10 00:40:15.081554 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 10 00:40:15.081666 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 10 00:40:15.081793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 10 00:40:15.081919 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 10 00:40:15.081932 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 10 00:40:15.081940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 10 00:40:15.081948 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 10 00:40:15.081955 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 10 00:40:15.081984 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 10 00:40:15.081992 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 10 00:40:15.082000 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 10 00:40:15.082007 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 10 00:40:15.082023 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 10 00:40:15.082035 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 10 00:40:15.082043 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 10 00:40:15.082050 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 10 00:40:15.082057 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 10 00:40:15.082079 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 10 00:40:15.082088 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 10 00:40:15.082095 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 10 00:40:15.082103 kernel: iommu: Default domain type: Translated Sep 10 00:40:15.082111 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 10 00:40:15.082227 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 10 00:40:15.082336 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 10 00:40:15.082425 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 10 00:40:15.082439 kernel: vgaarb: loaded Sep 10 00:40:15.082447 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 10 00:40:15.082454 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 10 00:40:15.082467 kernel: PTP clock support registered Sep 10 00:40:15.082475 kernel: PCI: Using ACPI for IRQ routing Sep 10 00:40:15.082482 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 10 00:40:15.082490 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 10 00:40:15.082497 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 10 00:40:15.082505 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 10 00:40:15.082514 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 10 00:40:15.082535 kernel: clocksource: Switched to clocksource kvm-clock Sep 10 00:40:15.082543 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 00:40:15.082551 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 00:40:15.082558 kernel: pnp: PnP ACPI init Sep 10 00:40:15.082716 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 10 00:40:15.082729 kernel: pnp: PnP ACPI: found 6 devices Sep 10 00:40:15.082746 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 10 00:40:15.082762 kernel: NET: Registered PF_INET protocol family Sep 10 00:40:15.082769 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 00:40:15.082777 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 00:40:15.082784 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 00:40:15.082791 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 00:40:15.082799 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 10 00:40:15.082807 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 00:40:15.082814 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:40:15.082822 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:40:15.082831 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 00:40:15.082838 kernel: NET: Registered PF_XDP protocol family Sep 10 00:40:15.082935 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 10 00:40:15.083006 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 10 00:40:15.083073 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 10 00:40:15.083141 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 10 00:40:15.083207 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 10 00:40:15.083274 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 10 00:40:15.083284 kernel: PCI: CLS 0 bytes, default 64 Sep 10 00:40:15.083294 kernel: Initialise system trusted keyrings Sep 10 00:40:15.083302 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 00:40:15.083309 kernel: Key type asymmetric registered Sep 10 00:40:15.083317 kernel: Asymmetric key parser 'x509' registered Sep 10 00:40:15.083324 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 10 00:40:15.083331 kernel: io scheduler mq-deadline registered Sep 10 00:40:15.083339 kernel: io scheduler kyber registered Sep 10 00:40:15.083346 kernel: io scheduler bfq registered Sep 10 00:40:15.083353 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 10 00:40:15.083370 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 10 00:40:15.083379 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 10 
00:40:15.083386 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 10 00:40:15.083394 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 00:40:15.083401 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 10 00:40:15.083408 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 10 00:40:15.083416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 10 00:40:15.083423 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 10 00:40:15.083512 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 10 00:40:15.083527 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 10 00:40:15.083596 kernel: rtc_cmos 00:04: registered as rtc0 Sep 10 00:40:15.083666 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:40:14 UTC (1757464814) Sep 10 00:40:15.083735 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 10 00:40:15.083745 kernel: NET: Registered PF_INET6 protocol family Sep 10 00:40:15.083752 kernel: Segment Routing with IPv6 Sep 10 00:40:15.083759 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 00:40:15.083766 kernel: NET: Registered PF_PACKET protocol family Sep 10 00:40:15.083776 kernel: Key type dns_resolver registered Sep 10 00:40:15.083783 kernel: IPI shorthand broadcast: enabled Sep 10 00:40:15.083791 kernel: sched_clock: Marking stable (458488901, 124040035)->(656030146, -73501210) Sep 10 00:40:15.083798 kernel: registered taskstats version 1 Sep 10 00:40:15.083805 kernel: Loading compiled-in X.509 certificates Sep 10 00:40:15.083813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 3af57cd809cc9e43d7af9f276bb20b532a4147af' Sep 10 00:40:15.083820 kernel: Key type .fscrypt registered Sep 10 00:40:15.083827 kernel: Key type fscrypt-provisioning registered Sep 10 00:40:15.083835 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 00:40:15.083844 kernel: ima: Allocated hash algorithm: sha1 Sep 10 00:40:15.083851 kernel: ima: No architecture policies found Sep 10 00:40:15.083859 kernel: clk: Disabling unused clocks Sep 10 00:40:15.083866 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 10 00:40:15.083873 kernel: Write protecting the kernel read-only data: 28672k Sep 10 00:40:15.083881 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 10 00:40:15.083888 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 10 00:40:15.083926 kernel: Run /init as init process Sep 10 00:40:15.083935 kernel: with arguments: Sep 10 00:40:15.083942 kernel: /init Sep 10 00:40:15.083949 kernel: with environment: Sep 10 00:40:15.083956 kernel: HOME=/ Sep 10 00:40:15.083963 kernel: TERM=linux Sep 10 00:40:15.083970 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 00:40:15.083984 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 10 00:40:15.083995 systemd[1]: Detected virtualization kvm. Sep 10 00:40:15.084005 systemd[1]: Detected architecture x86-64. Sep 10 00:40:15.084012 systemd[1]: Running in initrd. Sep 10 00:40:15.084020 systemd[1]: No hostname configured, using default hostname. Sep 10 00:40:15.084027 systemd[1]: Hostname set to . 
Sep 10 00:40:15.084035 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:40:15.084043 systemd[1]: Queued start job for default target initrd.target. Sep 10 00:40:15.084051 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:40:15.084059 systemd[1]: Reached target cryptsetup.target. Sep 10 00:40:15.084066 systemd[1]: Reached target paths.target. Sep 10 00:40:15.084076 systemd[1]: Reached target slices.target. Sep 10 00:40:15.084094 systemd[1]: Reached target swap.target. Sep 10 00:40:15.084103 systemd[1]: Reached target timers.target. Sep 10 00:40:15.084111 systemd[1]: Listening on iscsid.socket. Sep 10 00:40:15.084119 systemd[1]: Listening on iscsiuio.socket. Sep 10 00:40:15.084129 systemd[1]: Listening on systemd-journald-audit.socket. Sep 10 00:40:15.084137 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 10 00:40:15.084145 systemd[1]: Listening on systemd-journald.socket. Sep 10 00:40:15.084153 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:40:15.084161 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:40:15.084169 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:40:15.084176 systemd[1]: Reached target sockets.target. Sep 10 00:40:15.084185 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:40:15.084192 systemd[1]: Finished network-cleanup.service. Sep 10 00:40:15.084202 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 00:40:15.084210 systemd[1]: Starting systemd-journald.service... Sep 10 00:40:15.084218 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:40:15.084226 systemd[1]: Starting systemd-resolved.service... Sep 10 00:40:15.084234 systemd[1]: Starting systemd-vconsole-setup.service... Sep 10 00:40:15.084241 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:40:15.084249 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 00:40:15.084257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 10 00:40:15.084266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 10 00:40:15.084282 systemd-journald[197]: Journal started Sep 10 00:40:15.084328 systemd-journald[197]: Runtime Journal (/run/log/journal/848939a85b034eb5a54ab7efe5bd209b) is 6.0M, max 48.5M, 42.5M free. Sep 10 00:40:15.070832 systemd-modules-load[198]: Inserted module 'overlay' Sep 10 00:40:15.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.089618 systemd-resolved[199]: Positive Trust Anchors: Sep 10 00:40:15.115443 kernel: audit: type=1130 audit(1757464815.109:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.115459 systemd[1]: Started systemd-journald.service. Sep 10 00:40:15.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.089633 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:40:15.122912 kernel: audit: type=1130 audit(1757464815.114:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:15.122944 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 00:40:15.122973 kernel: audit: type=1130 audit(1757464815.121:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.089661 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:40:15.092311 systemd-resolved[199]: Defaulting to hostname 'linux'. Sep 10 00:40:15.115562 systemd[1]: Started systemd-resolved.service. Sep 10 00:40:15.123145 systemd[1]: Finished systemd-vconsole-setup.service. Sep 10 00:40:15.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.135812 systemd[1]: Reached target nss-lookup.target. Sep 10 00:40:15.141825 kernel: audit: type=1130 audit(1757464815.135:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.141841 kernel: Bridge firewalling registered Sep 10 00:40:15.139153 systemd-modules-load[198]: Inserted module 'br_netfilter' Sep 10 00:40:15.140478 systemd[1]: Starting dracut-cmdline-ask.service... Sep 10 00:40:15.155623 systemd[1]: Finished dracut-cmdline-ask.service. Sep 10 00:40:15.161178 kernel: audit: type=1130 audit(1757464815.155:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.157591 systemd[1]: Starting dracut-cmdline.service... Sep 10 00:40:15.162922 kernel: SCSI subsystem initialized Sep 10 00:40:15.168128 dracut-cmdline[218]: dracut-dracut-053 Sep 10 00:40:15.171101 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:40:15.179496 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 10 00:40:15.180024 kernel: device-mapper: uevent: version 1.0.3 Sep 10 00:40:15.180061 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 10 00:40:15.185705 systemd-modules-load[198]: Inserted module 'dm_multipath' Sep 10 00:40:15.187148 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:40:15.193235 kernel: audit: type=1130 audit(1757464815.187:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.189615 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:40:15.202934 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:40:15.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.207927 kernel: audit: type=1130 audit(1757464815.203:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.251930 kernel: Loading iSCSI transport class v2.0-870. Sep 10 00:40:15.268925 kernel: iscsi: registered transport (tcp) Sep 10 00:40:15.289917 kernel: iscsi: registered transport (qla4xxx) Sep 10 00:40:15.289935 kernel: QLogic iSCSI HBA Driver Sep 10 00:40:15.327568 systemd[1]: Finished dracut-cmdline.service. Sep 10 00:40:15.332059 kernel: audit: type=1130 audit(1757464815.326:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.332061 systemd[1]: Starting dracut-pre-udev.service... Sep 10 00:40:15.383935 kernel: raid6: avx2x4 gen() 29813 MB/s Sep 10 00:40:15.400926 kernel: raid6: avx2x4 xor() 7741 MB/s Sep 10 00:40:15.417917 kernel: raid6: avx2x2 gen() 32115 MB/s Sep 10 00:40:15.434927 kernel: raid6: avx2x2 xor() 19003 MB/s Sep 10 00:40:15.451922 kernel: raid6: avx2x1 gen() 26411 MB/s Sep 10 00:40:15.469077 kernel: raid6: avx2x1 xor() 14657 MB/s Sep 10 00:40:15.485949 kernel: raid6: sse2x4 gen() 13949 MB/s Sep 10 00:40:15.502920 kernel: raid6: sse2x4 xor() 7501 MB/s Sep 10 00:40:15.519946 kernel: raid6: sse2x2 gen() 16233 MB/s Sep 10 00:40:15.536929 kernel: raid6: sse2x2 xor() 9711 MB/s Sep 10 00:40:15.553925 kernel: raid6: sse2x1 gen() 11818 MB/s Sep 10 00:40:15.571262 kernel: raid6: sse2x1 xor() 7743 MB/s Sep 10 00:40:15.571285 kernel: raid6: using algorithm avx2x2 gen() 32115 MB/s Sep 10 00:40:15.571300 kernel: raid6: .... xor() 19003 MB/s, rmw enabled Sep 10 00:40:15.571952 kernel: raid6: using avx2x2 recovery algorithm Sep 10 00:40:15.584922 kernel: xor: automatically using best checksumming function avx Sep 10 00:40:15.680940 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 10 00:40:15.688539 systemd[1]: Finished dracut-pre-udev.service. 
Sep 10 00:40:15.693126 kernel: audit: type=1130 audit(1757464815.689:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.692000 audit: BPF prog-id=7 op=LOAD Sep 10 00:40:15.692000 audit: BPF prog-id=8 op=LOAD Sep 10 00:40:15.693624 systemd[1]: Starting systemd-udevd.service... Sep 10 00:40:15.707070 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 10 00:40:15.712490 systemd[1]: Started systemd-udevd.service. Sep 10 00:40:15.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.715038 systemd[1]: Starting dracut-pre-trigger.service... Sep 10 00:40:15.725979 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Sep 10 00:40:15.752546 systemd[1]: Finished dracut-pre-trigger.service. Sep 10 00:40:15.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.754231 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:40:15.795728 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:40:15.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:15.829664 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 00:40:15.835795 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 00:40:15.835811 kernel: GPT:9289727 != 19775487 Sep 10 00:40:15.835825 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 00:40:15.835835 kernel: GPT:9289727 != 19775487 Sep 10 00:40:15.835843 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 00:40:15.835852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:40:15.838932 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 00:40:15.850941 kernel: libata version 3.00 loaded. Sep 10 00:40:15.858917 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 10 00:40:15.858949 kernel: AES CTR mode by8 optimization enabled Sep 10 00:40:15.863926 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 00:40:15.868852 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 00:40:15.868867 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 10 00:40:15.868997 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 00:40:15.869082 kernel: scsi host0: ahci Sep 10 00:40:15.869185 kernel: scsi host1: ahci Sep 10 00:40:15.869281 kernel: scsi host2: ahci Sep 10 00:40:15.869388 kernel: scsi host3: ahci Sep 10 00:40:15.869495 kernel: scsi host4: ahci Sep 10 00:40:15.869584 kernel: scsi host5: ahci Sep 10 00:40:15.869680 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 10 00:40:15.869690 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 10 00:40:15.869700 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 10 00:40:15.869708 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 10 00:40:15.869720 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 10 00:40:15.869729 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 10 00:40:15.869737 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (439) Sep 10 00:40:15.863581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 10 00:40:15.929448 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 10 00:40:15.935381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 10 00:40:15.941783 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 10 00:40:15.946843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:40:15.948815 systemd[1]: Starting disk-uuid.service... Sep 10 00:40:15.958518 disk-uuid[523]: Primary Header is updated. Sep 10 00:40:15.958518 disk-uuid[523]: Secondary Entries is updated. Sep 10 00:40:15.958518 disk-uuid[523]: Secondary Header is updated. Sep 10 00:40:15.961996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:40:15.965918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:40:16.179323 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 00:40:16.179401 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 00:40:16.179411 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 00:40:16.180920 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 00:40:16.181927 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 00:40:16.182915 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 00:40:16.183936 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 00:40:16.183967 kernel: ata3.00: applying bridge limits Sep 10 00:40:16.185254 kernel: ata3.00: configured for UDMA/100 Sep 10 00:40:16.185915 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 00:40:16.219915 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 00:40:16.237492 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 00:40:16.237505 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 00:40:16.966593 disk-uuid[524]: The operation has completed successfully. Sep 10 00:40:16.967944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:40:16.993508 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 10 00:40:16.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:16.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:16.993609 systemd[1]: Finished disk-uuid.service. Sep 10 00:40:17.002526 systemd[1]: Starting verity-setup.service... Sep 10 00:40:17.016936 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 10 00:40:17.038548 systemd[1]: Found device dev-mapper-usr.device. Sep 10 00:40:17.040939 systemd[1]: Mounting sysusr-usr.mount... Sep 10 00:40:17.043294 systemd[1]: Finished verity-setup.service. Sep 10 00:40:17.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.107927 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 10 00:40:17.107964 systemd[1]: Mounted sysusr-usr.mount. Sep 10 00:40:17.108419 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 10 00:40:17.109464 systemd[1]: Starting ignition-setup.service... Sep 10 00:40:17.112536 systemd[1]: Starting parse-ip-for-networkd.service... Sep 10 00:40:17.123609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:40:17.123669 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:40:17.123679 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:40:17.131685 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 10 00:40:17.139089 systemd[1]: Finished ignition-setup.service. Sep 10 00:40:17.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.140450 systemd[1]: Starting ignition-fetch-offline.service... Sep 10 00:40:17.177721 ignition[643]: Ignition 2.14.0 Sep 10 00:40:17.177750 ignition[643]: Stage: fetch-offline Sep 10 00:40:17.177815 ignition[643]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:17.177827 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:17.177981 ignition[643]: parsed url from cmdline: "" Sep 10 00:40:17.177986 ignition[643]: no config URL provided Sep 10 00:40:17.177993 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 00:40:17.178003 ignition[643]: no config at "/usr/lib/ignition/user.ign" Sep 10 00:40:17.178035 ignition[643]: op(1): [started] loading QEMU firmware config module Sep 10 00:40:17.178041 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 00:40:17.181719 ignition[643]: op(1): [finished] loading QEMU firmware config module Sep 10 00:40:17.191235 systemd[1]: Finished parse-ip-for-networkd.service. Sep 10 00:40:17.193603 systemd[1]: Starting systemd-networkd.service... Sep 10 00:40:17.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:17.192000 audit: BPF prog-id=9 op=LOAD Sep 10 00:40:17.226832 ignition[643]: parsing config with SHA512: 2c3491080eee21c418ca6d3a5a2cfbd67b4509b57a1f87903f5a252550c608fd55b370529c1520c6159d9a71f514e76b6816f473901e94c62ba5787bb0bdc510 Sep 10 00:40:17.233498 unknown[643]: fetched base config from "system" Sep 10 00:40:17.233513 unknown[643]: fetched user config from "qemu" Sep 10 00:40:17.234123 ignition[643]: fetch-offline: fetch-offline passed Sep 10 00:40:17.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.235333 systemd[1]: Finished ignition-fetch-offline.service. Sep 10 00:40:17.234178 ignition[643]: Ignition finished successfully Sep 10 00:40:17.249434 systemd-networkd[716]: lo: Link UP Sep 10 00:40:17.249443 systemd-networkd[716]: lo: Gained carrier Sep 10 00:40:17.251211 systemd-networkd[716]: Enumeration completed Sep 10 00:40:17.251305 systemd[1]: Started systemd-networkd.service. Sep 10 00:40:17.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.251966 systemd[1]: Reached target network.target. Sep 10 00:40:17.253663 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 00:40:17.254493 systemd[1]: Starting ignition-kargs.service... Sep 10 00:40:17.255929 systemd[1]: Starting iscsiuio.service... Sep 10 00:40:17.258475 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:40:17.261329 systemd-networkd[716]: eth0: Link UP Sep 10 00:40:17.261572 systemd[1]: Started iscsiuio.service. Sep 10 00:40:17.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.262563 systemd-networkd[716]: eth0: Gained carrier Sep 10 00:40:17.263848 systemd[1]: Starting iscsid.service... Sep 10 00:40:17.266673 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:40:17.266673 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 10 00:40:17.266673 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 10 00:40:17.266673 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 10 00:40:17.266673 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:40:17.266673 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 10 00:40:17.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 10 00:40:17.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.268104 systemd[1]: Started iscsid.service. Sep 10 00:40:17.270638 ignition[718]: Ignition 2.14.0 Sep 10 00:40:17.273786 systemd[1]: Starting dracut-initqueue.service... Sep 10 00:40:17.270644 ignition[718]: Stage: kargs Sep 10 00:40:17.274981 systemd[1]: Finished ignition-kargs.service. Sep 10 00:40:17.270741 ignition[718]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:17.277595 systemd[1]: Starting ignition-disks.service... Sep 10 00:40:17.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.270752 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:17.282055 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:40:17.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.271931 ignition[718]: kargs: kargs passed Sep 10 00:40:17.286995 systemd[1]: Finished dracut-initqueue.service. Sep 10 00:40:17.271967 ignition[718]: Ignition finished successfully Sep 10 00:40:17.288080 systemd[1]: Finished ignition-disks.service. Sep 10 00:40:17.284891 ignition[730]: Ignition 2.14.0 Sep 10 00:40:17.290006 systemd[1]: Reached target initrd-root-device.target. Sep 10 00:40:17.284911 ignition[730]: Stage: disks Sep 10 00:40:17.292006 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:40:17.285010 ignition[730]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:17.292828 systemd[1]: Reached target local-fs.target. Sep 10 00:40:17.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.285018 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:17.294229 systemd[1]: Reached target remote-fs-pre.target. Sep 10 00:40:17.285854 ignition[730]: disks: disks passed Sep 10 00:40:17.295762 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:40:17.285887 ignition[730]: Ignition finished successfully Sep 10 00:40:17.296171 systemd[1]: Reached target remote-fs.target. Sep 10 00:40:17.296339 systemd[1]: Reached target sysinit.target. Sep 10 00:40:17.296498 systemd[1]: Reached target basic.target. Sep 10 00:40:17.297544 systemd[1]: Starting dracut-pre-mount.service... Sep 10 00:40:17.304460 systemd[1]: Finished dracut-pre-mount.service. Sep 10 00:40:17.306027 systemd[1]: Starting systemd-fsck-root.service... Sep 10 00:40:17.318682 systemd-fsck[751]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 10 00:40:17.323616 systemd[1]: Finished systemd-fsck-root.service. Sep 10 00:40:17.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.325344 systemd[1]: Mounting sysroot.mount... Sep 10 00:40:17.331930 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Sep 10 00:40:17.331147 systemd[1]: Mounted sysroot.mount. Sep 10 00:40:17.331932 systemd[1]: Reached target initrd-root-fs.target. Sep 10 00:40:17.334961 systemd[1]: Mounting sysroot-usr.mount... Sep 10 00:40:17.336675 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 10 00:40:17.336719 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 00:40:17.336742 systemd[1]: Reached target ignition-diskful.target. Sep 10 00:40:17.341971 systemd[1]: Mounted sysroot-usr.mount. Sep 10 00:40:17.343967 systemd[1]: Starting initrd-setup-root.service... Sep 10 00:40:17.348276 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:40:17.351853 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:40:17.355774 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:40:17.359623 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:40:17.385609 systemd[1]: Finished initrd-setup-root.service. Sep 10 00:40:17.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.387851 systemd[1]: Starting ignition-mount.service... Sep 10 00:40:17.389878 systemd[1]: Starting sysroot-boot.service... Sep 10 00:40:17.392876 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Sep 10 00:40:17.400879 ignition[803]: INFO : Ignition 2.14.0 Sep 10 00:40:17.400879 ignition[803]: INFO : Stage: mount Sep 10 00:40:17.402785 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:17.402785 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:17.402785 ignition[803]: INFO : mount: mount passed Sep 10 00:40:17.402785 ignition[803]: INFO : Ignition finished successfully Sep 10 00:40:17.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:17.403126 systemd[1]: Finished ignition-mount.service. Sep 10 00:40:17.412058 systemd[1]: Finished sysroot-boot.service. Sep 10 00:40:17.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:18.052504 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 10 00:40:18.058943 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Sep 10 00:40:18.058978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:40:18.061581 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:40:18.061605 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:40:18.065068 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 10 00:40:18.066272 systemd[1]: Starting ignition-files.service... 
Sep 10 00:40:18.079016 ignition[832]: INFO : Ignition 2.14.0 Sep 10 00:40:18.079016 ignition[832]: INFO : Stage: files Sep 10 00:40:18.081471 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:18.081471 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:18.081471 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Sep 10 00:40:18.081471 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 00:40:18.081471 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 00:40:18.090249 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 00:40:18.090249 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 00:40:18.090249 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 00:40:18.090249 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:40:18.090249 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:40:18.090249 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 00:40:18.090249 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 10 00:40:18.084068 unknown[832]: wrote ssh authorized keys file for user: core Sep 10 00:40:18.158423 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 00:40:18.653181 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:40:18.655465 ignition[832]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:40:18.655465 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 10 00:40:18.852157 systemd-networkd[716]: eth0: Gained IPv6LL Sep 10 00:40:19.250921 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 10 00:40:20.195906 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:40:20.195906 ignition[832]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 10 00:40:20.199931 ignition[832]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 10 00:40:20.199931 ignition[832]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 10 00:40:20.199931 ignition[832]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 10 00:40:20.199931 ignition[832]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 10 00:40:20.207137 ignition[832]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:40:20.207137 ignition[832]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:40:20.207137 ignition[832]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 10 00:40:20.207137 ignition[832]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 10 00:40:20.207137 ignition[832]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 00:40:20.215451 ignition[832]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:40:20.253002 ignition[832]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:40:20.255432 
ignition[832]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 00:40:20.255432 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:40:20.255432 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:40:20.255432 ignition[832]: INFO : files: files passed Sep 10 00:40:20.262021 ignition[832]: INFO : Ignition finished successfully Sep 10 00:40:20.264295 systemd[1]: Finished ignition-files.service. Sep 10 00:40:20.271456 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 10 00:40:20.271502 kernel: audit: type=1130 audit(1757464820.263:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.265592 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 10 00:40:20.271425 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 10 00:40:20.276726 initrd-setup-root-after-ignition[855]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 10 00:40:20.281860 kernel: audit: type=1130 audit(1757464820.275:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.272252 systemd[1]: Starting ignition-quench.service... Sep 10 00:40:20.289275 kernel: audit: type=1130 audit(1757464820.280:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.289294 kernel: audit: type=1131 audit(1757464820.280:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.289384 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:40:20.273706 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 10 00:40:20.276953 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 00:40:20.277025 systemd[1]: Finished ignition-quench.service. Sep 10 00:40:20.282014 systemd[1]: Reached target ignition-complete.target. Sep 10 00:40:20.289940 systemd[1]: Starting initrd-parse-etc.service... 
Sep 10 00:40:20.302511 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 00:40:20.302597 systemd[1]: Finished initrd-parse-etc.service. Sep 10 00:40:20.311851 kernel: audit: type=1130 audit(1757464820.304:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.311871 kernel: audit: type=1131 audit(1757464820.304:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.304631 systemd[1]: Reached target initrd-fs.target. Sep 10 00:40:20.311870 systemd[1]: Reached target initrd.target. Sep 10 00:40:20.312686 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 10 00:40:20.313487 systemd[1]: Starting dracut-pre-pivot.service... Sep 10 00:40:20.323484 systemd[1]: Finished dracut-pre-pivot.service. Sep 10 00:40:20.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.325796 systemd[1]: Starting initrd-cleanup.service... Sep 10 00:40:20.329717 kernel: audit: type=1130 audit(1757464820.325:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.334019 systemd[1]: Stopped target nss-lookup.target. Sep 10 00:40:20.334937 systemd[1]: Stopped target remote-cryptsetup.target. Sep 10 00:40:20.336511 systemd[1]: Stopped target timers.target. Sep 10 00:40:20.338090 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 00:40:20.344255 kernel: audit: type=1131 audit(1757464820.338:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.338209 systemd[1]: Stopped dracut-pre-pivot.service. Sep 10 00:40:20.339625 systemd[1]: Stopped target initrd.target. Sep 10 00:40:20.344335 systemd[1]: Stopped target basic.target. Sep 10 00:40:20.345864 systemd[1]: Stopped target ignition-complete.target. Sep 10 00:40:20.347424 systemd[1]: Stopped target ignition-diskful.target. Sep 10 00:40:20.348956 systemd[1]: Stopped target initrd-root-device.target. Sep 10 00:40:20.350605 systemd[1]: Stopped target remote-fs.target. Sep 10 00:40:20.352197 systemd[1]: Stopped target remote-fs-pre.target. Sep 10 00:40:20.354077 systemd[1]: Stopped target sysinit.target. Sep 10 00:40:20.355595 systemd[1]: Stopped target local-fs.target. Sep 10 00:40:20.357262 systemd[1]: Stopped target local-fs-pre.target. 
Sep 10 00:40:20.358720 systemd[1]: Stopped target swap.target. Sep 10 00:40:20.366243 kernel: audit: type=1131 audit(1757464820.360:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.360141 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 00:40:20.360262 systemd[1]: Stopped dracut-pre-mount.service. Sep 10 00:40:20.372394 kernel: audit: type=1131 audit(1757464820.367:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.361983 systemd[1]: Stopped target cryptsetup.target. Sep 10 00:40:20.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.366302 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:40:20.366394 systemd[1]: Stopped dracut-initqueue.service. Sep 10 00:40:20.368153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:40:20.368264 systemd[1]: Stopped ignition-fetch-offline.service. Sep 10 00:40:20.372524 systemd[1]: Stopped target paths.target. Sep 10 00:40:20.373937 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:40:20.377983 systemd[1]: Stopped systemd-ask-password-console.path. Sep 10 00:40:20.379528 systemd[1]: Stopped target slices.target. Sep 10 00:40:20.381249 systemd[1]: Stopped target sockets.target. Sep 10 00:40:20.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.382915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:40:20.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.383034 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 10 00:40:20.389547 iscsid[722]: iscsid shutting down. Sep 10 00:40:20.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.384577 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:40:20.384664 systemd[1]: Stopped ignition-files.service. Sep 10 00:40:20.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.386888 systemd[1]: Stopping ignition-mount.service... Sep 10 00:40:20.387996 systemd[1]: Stopping iscsid.service... 
Sep 10 00:40:20.390204 systemd[1]: Stopping sysroot-boot.service... Sep 10 00:40:20.391069 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:40:20.398817 ignition[872]: INFO : Ignition 2.14.0 Sep 10 00:40:20.398817 ignition[872]: INFO : Stage: umount Sep 10 00:40:20.398817 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:40:20.398817 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:40:20.391255 systemd[1]: Stopped systemd-udev-trigger.service. Sep 10 00:40:20.403791 ignition[872]: INFO : umount: umount passed Sep 10 00:40:20.403791 ignition[872]: INFO : Ignition finished successfully Sep 10 00:40:20.392256 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:40:20.392341 systemd[1]: Stopped dracut-pre-trigger.service. Sep 10 00:40:20.409205 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:40:20.410827 systemd[1]: iscsid.service: Deactivated successfully. Sep 10 00:40:20.411813 systemd[1]: Stopped iscsid.service. Sep 10 00:40:20.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.413659 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:40:20.414799 systemd[1]: Stopped ignition-mount.service. Sep 10 00:40:20.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.416704 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:40:20.417639 systemd[1]: Closed iscsid.socket. Sep 10 00:40:20.419076 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 00:40:20.419126 systemd[1]: Stopped ignition-disks.service. Sep 10 00:40:20.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.421667 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:40:20.421712 systemd[1]: Stopped ignition-kargs.service. Sep 10 00:40:20.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.424262 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:40:20.424306 systemd[1]: Stopped ignition-setup.service. Sep 10 00:40:20.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.426973 systemd[1]: Stopping iscsiuio.service... Sep 10 00:40:20.428610 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:40:20.429559 systemd[1]: Finished initrd-cleanup.service. Sep 10 00:40:20.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:20.431492 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 10 00:40:20.432485 systemd[1]: Stopped iscsiuio.service. Sep 10 00:40:20.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.434686 systemd[1]: Stopped target network.target. Sep 10 00:40:20.436316 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:40:20.436352 systemd[1]: Closed iscsiuio.socket. Sep 10 00:40:20.438577 systemd[1]: Stopping systemd-networkd.service... Sep 10 00:40:20.440459 systemd[1]: Stopping systemd-resolved.service... Sep 10 00:40:20.441949 systemd-networkd[716]: eth0: DHCPv6 lease lost Sep 10 00:40:20.443401 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:40:20.443522 systemd[1]: Stopped systemd-networkd.service. Sep 10 00:40:20.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.444363 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:40:20.444393 systemd[1]: Closed systemd-networkd.socket. Sep 10 00:40:20.449632 systemd[1]: Stopping network-cleanup.service... Sep 10 00:40:20.448000 audit: BPF prog-id=9 op=UNLOAD Sep 10 00:40:20.450140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:40:20.450184 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 10 00:40:20.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.450606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:40:20.450644 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:40:20.456606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:40:20.456653 systemd[1]: Stopped systemd-modules-load.service. Sep 10 00:40:20.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.459652 systemd[1]: Stopping systemd-udevd.service... Sep 10 00:40:20.462200 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 00:40:20.463853 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:40:20.463969 systemd[1]: Stopped systemd-resolved.service. Sep 10 00:40:20.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.468342 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:40:20.468434 systemd[1]: Stopped network-cleanup.service. Sep 10 00:40:20.467000 audit: BPF prog-id=6 op=UNLOAD Sep 10 00:40:20.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:20.472536 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:40:20.472661 systemd[1]: Stopped systemd-udevd.service. Sep 10 00:40:20.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.474857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:40:20.474937 systemd[1]: Closed systemd-udevd-control.socket. Sep 10 00:40:20.475597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:40:20.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.475627 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 10 00:40:20.477133 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:40:20.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.477173 systemd[1]: Stopped dracut-pre-udev.service. Sep 10 00:40:20.477474 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:40:20.477505 systemd[1]: Stopped dracut-cmdline.service. Sep 10 00:40:20.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.482263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:40:20.482323 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 10 00:40:20.487998 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 10 00:40:20.489657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:40:20.489705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 10 00:40:20.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.492595 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:40:20.492636 systemd[1]: Stopped kmod-static-nodes.service. Sep 10 00:40:20.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.495211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:40:20.495256 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 10 00:40:20.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.498585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 00:40:20.500333 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:40:20.501266 systemd[1]: Stopped sysroot-boot.service. Sep 10 00:40:20.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:20.502833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:40:20.503884 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 10 00:40:20.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.505678 systemd[1]: Reached target initrd-switch-root.target. Sep 10 00:40:20.507397 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:40:20.507437 systemd[1]: Stopped initrd-setup-root.service. Sep 10 00:40:20.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:20.510466 systemd[1]: Starting initrd-switch-root.service... Sep 10 00:40:20.516294 systemd[1]: Switching root. Sep 10 00:40:20.519000 audit: BPF prog-id=5 op=UNLOAD Sep 10 00:40:20.519000 audit: BPF prog-id=4 op=UNLOAD Sep 10 00:40:20.519000 audit: BPF prog-id=3 op=UNLOAD Sep 10 00:40:20.520000 audit: BPF prog-id=8 op=UNLOAD Sep 10 00:40:20.520000 audit: BPF prog-id=7 op=UNLOAD Sep 10 00:40:20.537823 systemd-journald[197]: Journal stopped Sep 10 00:40:24.238363 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Sep 10 00:40:24.238438 kernel: SELinux: Class mctp_socket not defined in policy. Sep 10 00:40:24.238453 kernel: SELinux: Class anon_inode not defined in policy. Sep 10 00:40:24.238464 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 10 00:40:24.238474 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:40:24.238494 kernel: SELinux: policy capability open_perms=1 Sep 10 00:40:24.238533 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:40:24.238565 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:40:24.238598 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:40:24.238611 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:40:24.238622 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:40:24.238646 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:40:24.238698 systemd[1]: Successfully loaded SELinux policy in 42.252ms. Sep 10 00:40:24.238758 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.645ms. Sep 10 00:40:24.238800 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 10 00:40:24.238815 systemd[1]: Detected virtualization kvm. Sep 10 00:40:24.238832 systemd[1]: Detected architecture x86-64. Sep 10 00:40:24.238844 systemd[1]: Detected first boot. Sep 10 00:40:24.238855 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:40:24.238868 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 10 00:40:24.238886 systemd[1]: Populated /etc with preset unit settings. 
Sep 10 00:40:24.238946 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:40:24.238990 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:40:24.239007 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:40:24.239018 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:40:24.239028 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 10 00:40:24.239051 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 10 00:40:24.239072 systemd[1]: Created slice system-addon\x2drun.slice. Sep 10 00:40:24.239084 systemd[1]: Created slice system-getty.slice. Sep 10 00:40:24.239094 systemd[1]: Created slice system-modprobe.slice. Sep 10 00:40:24.239121 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 10 00:40:24.239153 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 10 00:40:24.239176 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 10 00:40:24.239207 systemd[1]: Created slice user.slice. Sep 10 00:40:24.239234 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:40:24.239248 systemd[1]: Started systemd-ask-password-wall.path. Sep 10 00:40:24.239260 systemd[1]: Set up automount boot.automount. Sep 10 00:40:24.239270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 10 00:40:24.239285 systemd[1]: Reached target integritysetup.target. Sep 10 00:40:24.239307 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:40:24.239338 systemd[1]: Reached target remote-fs.target. Sep 10 00:40:24.239370 systemd[1]: Reached target slices.target. Sep 10 00:40:24.239400 systemd[1]: Reached target swap.target. Sep 10 00:40:24.239426 systemd[1]: Reached target torcx.target. Sep 10 00:40:24.239441 systemd[1]: Reached target veritysetup.target. Sep 10 00:40:24.239451 systemd[1]: Listening on systemd-coredump.socket. Sep 10 00:40:24.239462 systemd[1]: Listening on systemd-initctl.socket. Sep 10 00:40:24.239472 systemd[1]: Listening on systemd-journald-audit.socket. Sep 10 00:40:24.239486 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 10 00:40:24.239497 systemd[1]: Listening on systemd-journald.socket. Sep 10 00:40:24.239507 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:40:24.239538 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:40:24.239555 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:40:24.239566 systemd[1]: Listening on systemd-userdbd.socket. Sep 10 00:40:24.239596 systemd[1]: Mounting dev-hugepages.mount... Sep 10 00:40:24.239627 systemd[1]: Mounting dev-mqueue.mount... Sep 10 00:40:24.239654 systemd[1]: Mounting media.mount... Sep 10 00:40:24.239665 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:24.239679 systemd[1]: Mounting sys-kernel-debug.mount... Sep 10 00:40:24.239716 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 10 00:40:24.239750 systemd[1]: Mounting tmp.mount... Sep 10 00:40:24.239764 systemd[1]: Starting flatcar-tmpfiles.service... Sep 10 00:40:24.239774 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 10 00:40:24.239795 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:40:24.239818 systemd[1]: Starting modprobe@configfs.service... Sep 10 00:40:24.239853 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:40:24.239886 systemd[1]: Starting modprobe@drm.service... Sep 10 00:40:24.239927 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:40:24.239946 systemd[1]: Starting modprobe@fuse.service... Sep 10 00:40:24.239978 systemd[1]: Starting modprobe@loop.service... Sep 10 00:40:24.240007 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:40:24.240020 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 10 00:40:24.240055 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 10 00:40:24.240071 kernel: fuse: init (API version 7.34) Sep 10 00:40:24.240089 systemd[1]: Starting systemd-journald.service... Sep 10 00:40:24.240100 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:40:24.240118 systemd[1]: Starting systemd-network-generator.service... Sep 10 00:40:24.240141 kernel: loop: module loaded Sep 10 00:40:24.240151 systemd[1]: Starting systemd-remount-fs.service... Sep 10 00:40:24.240167 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:40:24.240198 systemd-journald[1014]: Journal started Sep 10 00:40:24.240275 systemd-journald[1014]: Runtime Journal (/run/log/journal/848939a85b034eb5a54ab7efe5bd209b) is 6.0M, max 48.5M, 42.5M free. Sep 10 00:40:24.103000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 10 00:40:24.103000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 10 00:40:24.235000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 10 00:40:24.235000 audit[1014]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd53b485e0 a2=4000 a3=7ffd53b4867c items=0 ppid=1 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:24.235000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:40:24.246941 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:24.246979 systemd[1]: Started systemd-journald.service. Sep 10 00:40:24.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.249367 systemd[1]: Mounted dev-hugepages.mount. Sep 10 00:40:24.250457 systemd[1]: Mounted dev-mqueue.mount. Sep 10 00:40:24.251503 systemd[1]: Mounted media.mount. Sep 10 00:40:24.252298 systemd[1]: Mounted sys-kernel-debug.mount. Sep 10 00:40:24.253141 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 10 00:40:24.254027 systemd[1]: Mounted tmp.mount. Sep 10 00:40:24.255075 systemd[1]: Finished flatcar-tmpfiles.service. 
Sep 10 00:40:24.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.256314 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:40:24.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.257379 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:40:24.257579 systemd[1]: Finished modprobe@configfs.service. Sep 10 00:40:24.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.258640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:40:24.258803 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:40:24.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.259963 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:40:24.260177 systemd[1]: Finished modprobe@drm.service. Sep 10 00:40:24.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.261213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:40:24.261398 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:40:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.262561 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:40:24.262745 systemd[1]: Finished modprobe@fuse.service. Sep 10 00:40:24.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:24.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.263741 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:40:24.263990 systemd[1]: Finished modprobe@loop.service. Sep 10 00:40:24.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.265203 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:40:24.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.266467 systemd[1]: Finished systemd-network-generator.service. Sep 10 00:40:24.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.267864 systemd[1]: Finished systemd-remount-fs.service. Sep 10 00:40:24.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.269009 systemd[1]: Reached target network-pre.target. Sep 10 00:40:24.271258 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 10 00:40:24.273286 systemd[1]: Mounting sys-kernel-config.mount... Sep 10 00:40:24.274202 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:40:24.275941 systemd[1]: Starting systemd-hwdb-update.service... Sep 10 00:40:24.278152 systemd[1]: Starting systemd-journal-flush.service... Sep 10 00:40:24.279362 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:40:24.280461 systemd[1]: Starting systemd-random-seed.service... Sep 10 00:40:24.281817 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:40:24.283959 systemd-journald[1014]: Time spent on flushing to /var/log/journal/848939a85b034eb5a54ab7efe5bd209b is 17.626ms for 1038 entries. Sep 10 00:40:24.283959 systemd-journald[1014]: System Journal (/var/log/journal/848939a85b034eb5a54ab7efe5bd209b) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:40:24.320887 systemd-journald[1014]: Received client request to flush runtime journal. Sep 10 00:40:24.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:24.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.283132 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:40:24.286116 systemd[1]: Starting systemd-sysusers.service... Sep 10 00:40:24.289038 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 10 00:40:24.291478 systemd[1]: Mounted sys-kernel-config.mount. Sep 10 00:40:24.322107 udevadm[1061]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:40:24.295862 systemd[1]: Finished systemd-random-seed.service. Sep 10 00:40:24.296849 systemd[1]: Reached target first-boot-complete.target. Sep 10 00:40:24.306674 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:40:24.308196 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:40:24.310693 systemd[1]: Starting systemd-udev-settle.service... Sep 10 00:40:24.321644 systemd[1]: Finished systemd-journal-flush.service. Sep 10 00:40:24.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.328209 systemd[1]: Finished systemd-sysusers.service. Sep 10 00:40:24.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.330411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 10 00:40:24.348334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 10 00:40:24.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.862181 systemd[1]: Finished systemd-hwdb-update.service. Sep 10 00:40:24.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.864448 systemd[1]: Starting systemd-udevd.service... Sep 10 00:40:24.883178 systemd-udevd[1068]: Using default interface naming scheme 'v252'. Sep 10 00:40:24.896565 systemd[1]: Started systemd-udevd.service. Sep 10 00:40:24.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.898834 systemd[1]: Starting systemd-networkd.service... Sep 10 00:40:24.905135 systemd[1]: Starting systemd-userdbd.service... Sep 10 00:40:24.918327 systemd[1]: Found device dev-ttyS0.device. Sep 10 00:40:24.938726 systemd[1]: Started systemd-userdbd.service. Sep 10 00:40:24.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.960258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Sep 10 00:40:24.970923 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:40:24.977940 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:40:24.987133 systemd-networkd[1075]: lo: Link UP Sep 10 00:40:24.987144 systemd-networkd[1075]: lo: Gained carrier Sep 10 00:40:24.987565 systemd-networkd[1075]: Enumeration completed Sep 10 00:40:24.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:24.987857 systemd[1]: Started systemd-networkd.service. Sep 10 00:40:24.989143 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:40:24.990354 systemd-networkd[1075]: eth0: Link UP Sep 10 00:40:24.990363 systemd-networkd[1075]: eth0: Gained carrier Sep 10 00:40:24.993000 audit[1081]: AVC avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 10 00:40:25.002078 systemd-networkd[1075]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:40:24.993000 audit[1081]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ab337abba0 a1=338ec a2=7fbbe3023bc5 a3=5 items=110 ppid=1068 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:24.993000 audit: CWD cwd="/" Sep 10 00:40:24.993000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=1 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=2 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=3 name=(null) inode=2024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=4 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=5 name=(null) inode=2025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=6 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=7 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=8 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=9 name=(null) inode=2027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=10 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=11 name=(null) inode=2028 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=12 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=13 name=(null) inode=2029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=14 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=15 name=(null) inode=2030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=16 name=(null) inode=2026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=17 name=(null) inode=2031 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=18 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=19 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=20 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=21 name=(null) inode=2033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=22 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=23 name=(null) inode=2034 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=24 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 
audit: PATH item=25 name=(null) inode=2035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=26 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=27 name=(null) inode=2036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=28 name=(null) inode=2032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=29 name=(null) inode=2037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=30 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=31 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=32 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=33 name=(null) inode=2039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=34 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=35 name=(null) inode=2040 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=36 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=37 name=(null) inode=2041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=38 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=39 name=(null) inode=2042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=40 name=(null) inode=2038 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=41 name=(null) inode=2043 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=42 name=(null) inode=2023 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=43 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=44 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=45 name=(null) inode=2045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=46 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=47 name=(null) inode=2046 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=48 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=49 name=(null) inode=2047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=50 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=51 name=(null) inode=2048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=52 name=(null) inode=2044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=53 name=(null) inode=15361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=55 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=56 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=57 name=(null) inode=15363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=58 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=59 name=(null) inode=15364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=60 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=61 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=62 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=63 name=(null) inode=15366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=64 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=65 name=(null) inode=15367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=66 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=67 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=68 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=69 name=(null) inode=15369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=70 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=71 name=(null) inode=15370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=72 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=73 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=74 name=(null) 
inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=75 name=(null) inode=15372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=76 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=77 name=(null) inode=15373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=78 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=79 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=80 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=81 name=(null) inode=15375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=82 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=83 name=(null) inode=15376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=84 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=85 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=86 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=87 name=(null) inode=15378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=88 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=89 name=(null) inode=15379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=90 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=91 name=(null) inode=15380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=92 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=93 name=(null) inode=15381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=94 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=95 name=(null) inode=15382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=96 name=(null) inode=15362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=97 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=98 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=99 name=(null) inode=15384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=100 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=101 name=(null) inode=15385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=102 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=103 name=(null) inode=15386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=104 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=105 name=(null) inode=15387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=106 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=107 name=(null) inode=15388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PATH item=109 name=(null) inode=12050 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:40:24.993000 audit: PROCTITLE proctitle="(udev-worker)" Sep 10 00:40:25.069165 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:40:25.070084 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:40:25.070954 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:40:25.071996 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:40:25.075916 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:40:25.124952 kernel: kvm: Nested Virtualization enabled Sep 10 00:40:25.125136 kernel: SVM: kvm: Nested Paging enabled Sep 10 00:40:25.127465 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 10 00:40:25.127497 kernel: SVM: Virtual GIF supported Sep 10 00:40:25.145928 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:40:25.172480 systemd[1]: Finished systemd-udev-settle.service. Sep 10 00:40:25.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.175385 systemd[1]: Starting lvm2-activation-early.service... Sep 10 00:40:25.183474 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:40:25.214014 systemd[1]: Finished lvm2-activation-early.service. Sep 10 00:40:25.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.264566 systemd[1]: Reached target cryptsetup.target. Sep 10 00:40:25.266985 systemd[1]: Starting lvm2-activation.service... Sep 10 00:40:25.270576 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:40:25.300452 systemd[1]: Finished lvm2-activation.service. Sep 10 00:40:25.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.301723 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:40:25.302556 kernel: kauditd_printk_skb: 197 callbacks suppressed Sep 10 00:40:25.302612 kernel: audit: type=1130 audit(1757464825.300:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.306917 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:40:25.306954 systemd[1]: Reached target local-fs.target. 
Sep 10 00:40:25.307848 systemd[1]: Reached target machines.target. Sep 10 00:40:25.310201 systemd[1]: Starting ldconfig.service... Sep 10 00:40:25.311949 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:40:25.311992 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:25.313011 systemd[1]: Starting systemd-boot-update.service... Sep 10 00:40:25.314961 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 10 00:40:25.317488 systemd[1]: Starting systemd-machine-id-commit.service... Sep 10 00:40:25.319760 systemd[1]: Starting systemd-sysext.service... Sep 10 00:40:25.321143 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Sep 10 00:40:25.322198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 10 00:40:25.325861 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 10 00:40:25.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.331162 systemd[1]: Unmounting usr-share-oem.mount... Sep 10 00:40:25.333002 kernel: audit: type=1130 audit(1757464825.326:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.335624 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 10 00:40:25.335828 systemd[1]: Unmounted usr-share-oem.mount. Sep 10 00:40:25.350936 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:40:25.366643 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Sep 10 00:40:25.366643 systemd-fsck[1121]: /dev/vda1: 790 files, 120765/258078 clusters Sep 10 00:40:25.368513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 10 00:40:25.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.372510 systemd[1]: Mounting boot.mount... Sep 10 00:40:25.377926 kernel: audit: type=1130 audit(1757464825.370:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.581242 systemd[1]: Mounted boot.mount. Sep 10 00:40:25.649937 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:40:25.650568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:40:25.651349 systemd[1]: Finished systemd-boot-update.service. Sep 10 00:40:25.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.653032 systemd[1]: Finished systemd-machine-id-commit.service. 
Sep 10 00:40:25.656259 kernel: audit: type=1130 audit(1757464825.652:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.660961 kernel: audit: type=1130 audit(1757464825.657:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.668330 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:40:25.670930 kernel: loop1: detected capacity change from 0 to 221472 Sep 10 00:40:25.674217 systemd[1]: Finished ldconfig.service. Sep 10 00:40:25.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.678925 kernel: audit: type=1130 audit(1757464825.675:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.679527 (sd-sysext)[1131]: Using extensions 'kubernetes'. Sep 10 00:40:25.679871 (sd-sysext)[1131]: Merged extensions into '/usr'. Sep 10 00:40:25.700423 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:25.702721 systemd[1]: Mounting usr-share-oem.mount... Sep 10 00:40:25.703936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:40:25.705826 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:40:25.708355 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:40:25.710479 systemd[1]: Starting modprobe@loop.service... Sep 10 00:40:25.711307 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:40:25.711553 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:25.711717 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:25.714455 systemd[1]: Mounted usr-share-oem.mount. Sep 10 00:40:25.715581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:40:25.715736 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:40:25.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.716926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:40:25.717053 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:40:25.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 10 00:40:25.719942 kernel: audit: type=1130 audit(1757464825.715:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.719976 kernel: audit: type=1131 audit(1757464825.715:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.724200 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:40:25.724390 systemd[1]: Finished modprobe@loop.service. Sep 10 00:40:25.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.727922 kernel: audit: type=1130 audit(1757464825.723:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.727961 kernel: audit: type=1131 audit(1757464825.723:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.732885 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:40:25.733042 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:40:25.734253 systemd[1]: Finished systemd-sysext.service. Sep 10 00:40:25.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:25.737222 systemd[1]: Starting ensure-sysext.service... Sep 10 00:40:25.739459 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 10 00:40:25.743322 systemd[1]: Reloading. Sep 10 00:40:25.749604 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 10 00:40:25.750765 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:40:25.752383 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 10 00:40:26.183721 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-10T00:40:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:40:26.183758 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-09-10T00:40:26Z" level=info msg="torcx already run" Sep 10 00:40:26.218759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:40:26.218777 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:40:26.238409 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:40:26.296789 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 10 00:40:26.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:26.301046 systemd[1]: Starting audit-rules.service... Sep 10 00:40:26.303207 systemd[1]: Starting clean-ca-certificates.service... Sep 10 00:40:26.305293 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 10 00:40:26.309185 systemd[1]: Starting systemd-resolved.service... Sep 10 00:40:26.311782 systemd[1]: Starting systemd-timesyncd.service... Sep 10 00:40:26.314141 systemd[1]: Starting systemd-update-utmp.service... Sep 10 00:40:26.316002 systemd[1]: Finished clean-ca-certificates.service. Sep 10 00:40:26.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:26.319000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 10 00:40:26.323739 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.324000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 10 00:40:26.324000 audit[1236]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9b949e20 a2=420 a3=0 items=0 ppid=1215 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:26.324000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 10 00:40:26.325696 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:40:26.326706 augenrules[1236]: No rules Sep 10 00:40:26.328393 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:40:26.330643 systemd[1]: Starting modprobe@loop.service... 
Sep 10 00:40:26.331727 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.332055 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:26.332218 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:40:26.333883 systemd[1]: Finished audit-rules.service. Sep 10 00:40:26.338667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:40:26.338993 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:40:26.340495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:40:26.340854 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:40:26.342536 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 10 00:40:26.344210 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:40:26.344355 systemd[1]: Finished modprobe@loop.service. Sep 10 00:40:26.345742 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.346557 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:40:26.346666 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.348342 systemd[1]: Starting systemd-update-done.service... Sep 10 00:40:26.349488 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.350286 systemd[1]: Finished systemd-update-utmp.service. Sep 10 00:40:26.353309 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.353755 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.355071 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:40:26.356808 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:40:26.358766 systemd[1]: Starting modprobe@loop.service... Sep 10 00:40:26.359630 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.359729 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:26.359811 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:40:26.359872 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.360784 systemd[1]: Finished systemd-update-done.service. Sep 10 00:40:26.362248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:40:26.362384 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:40:26.363650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:40:26.363782 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:40:26.365043 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:40:26.365240 systemd[1]: Finished modprobe@loop.service. 
Sep 10 00:40:26.366375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:40:26.366485 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.369162 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.369376 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.370866 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:40:26.372960 systemd[1]: Starting modprobe@drm.service... Sep 10 00:40:26.375172 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:40:26.377398 systemd[1]: Starting modprobe@loop.service... Sep 10 00:40:26.378366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.378506 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:26.380219 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 10 00:40:26.381491 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:40:26.381742 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:40:26.383298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:40:26.383489 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:40:26.384976 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:40:26.385116 systemd[1]: Finished modprobe@drm.service. Sep 10 00:40:26.386367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:40:26.386483 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:40:26.387780 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:40:26.387942 systemd[1]: Finished modprobe@loop.service. Sep 10 00:40:26.389364 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:40:26.389485 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:40:26.390650 systemd[1]: Finished ensure-sysext.service. Sep 10 00:40:26.394648 systemd-resolved[1222]: Positive Trust Anchors: Sep 10 00:40:26.395008 systemd-resolved[1222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:40:26.395042 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:40:26.397630 systemd[1]: Started systemd-timesyncd.service. Sep 10 00:40:27.813805 systemd-timesyncd[1226]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:40:27.813840 systemd[1]: Reached target time-set.target. Sep 10 00:40:27.813876 systemd-timesyncd[1226]: Initial clock synchronization to Wed 2025-09-10 00:40:27.813685 UTC. 
Sep 10 00:40:27.817694 systemd-resolved[1222]: Defaulting to hostname 'linux'. Sep 10 00:40:27.819400 systemd[1]: Started systemd-resolved.service. Sep 10 00:40:27.819524 systemd-networkd[1075]: eth0: Gained IPv6LL Sep 10 00:40:27.820470 systemd[1]: Reached target network.target. Sep 10 00:40:27.821256 systemd[1]: Reached target nss-lookup.target. Sep 10 00:40:27.822108 systemd[1]: Reached target sysinit.target. Sep 10 00:40:27.822967 systemd[1]: Started motdgen.path. Sep 10 00:40:27.823842 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 10 00:40:27.825070 systemd[1]: Started logrotate.timer. Sep 10 00:40:27.825901 systemd[1]: Started mdadm.timer. Sep 10 00:40:27.826636 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 10 00:40:27.827561 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:40:27.827629 systemd[1]: Reached target paths.target. Sep 10 00:40:27.828421 systemd[1]: Reached target timers.target. Sep 10 00:40:27.829488 systemd[1]: Listening on dbus.socket. Sep 10 00:40:27.831350 systemd[1]: Starting docker.socket... Sep 10 00:40:27.832986 systemd[1]: Listening on sshd.socket. Sep 10 00:40:27.833921 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:27.834387 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 10 00:40:27.835495 systemd[1]: Listening on docker.socket. Sep 10 00:40:27.836369 systemd[1]: Reached target network-online.target. Sep 10 00:40:27.837264 systemd[1]: Reached target sockets.target. Sep 10 00:40:27.838084 systemd[1]: Reached target basic.target. Sep 10 00:40:27.839012 systemd[1]: System is tainted: cgroupsv1 Sep 10 00:40:27.839054 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:40:27.839071 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:40:27.839898 systemd[1]: Starting containerd.service... Sep 10 00:40:27.841629 systemd[1]: Starting dbus.service... Sep 10 00:40:27.843408 systemd[1]: Starting enable-oem-cloudinit.service... Sep 10 00:40:27.845489 systemd[1]: Starting extend-filesystems.service... Sep 10 00:40:27.846624 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 10 00:40:27.849701 jq[1279]: false Sep 10 00:40:27.847644 systemd[1]: Starting kubelet.service... Sep 10 00:40:27.849602 systemd[1]: Starting motdgen.service... Sep 10 00:40:27.851647 systemd[1]: Starting prepare-helm.service... Sep 10 00:40:27.853551 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 10 00:40:27.855668 systemd[1]: Starting sshd-keygen.service... Sep 10 00:40:27.859314 systemd[1]: Starting systemd-logind.service... Sep 10 00:40:27.860278 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:40:27.860430 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:40:27.871797 systemd[1]: Starting update-engine.service... Sep 10 00:40:27.874354 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Sep 10 00:40:27.878108 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:40:27.885282 jq[1303]: true Sep 10 00:40:27.878401 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 10 00:40:27.880590 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:40:27.880842 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 10 00:40:27.885839 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:40:27.886072 systemd[1]: Finished motdgen.service. Sep 10 00:40:27.891661 jq[1308]: true Sep 10 00:40:27.893056 tar[1306]: linux-amd64/helm Sep 10 00:40:27.906475 dbus-daemon[1278]: [system] SELinux support is enabled Sep 10 00:40:27.906648 systemd[1]: Started dbus.service. Sep 10 00:40:27.909139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:40:27.909166 systemd[1]: Reached target system-config.target. Sep 10 00:40:27.910344 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:40:27.910367 systemd[1]: Reached target user-config.target. Sep 10 00:40:27.926819 update_engine[1301]: I0910 00:40:27.926584 1301 main.cc:92] Flatcar Update Engine starting Sep 10 00:40:27.929146 systemd[1]: Started update-engine.service. Sep 10 00:40:27.942322 update_engine[1301]: I0910 00:40:27.929225 1301 update_check_scheduler.cc:74] Next update check in 6m58s Sep 10 00:40:27.931853 systemd[1]: Started locksmithd.service. Sep 10 00:40:28.068620 extend-filesystems[1280]: Found loop1 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found sr0 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda1 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda2 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda3 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found usr Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda4 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda6 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda7 Sep 10 00:40:28.068620 extend-filesystems[1280]: Found vda9 Sep 10 00:40:28.068620 extend-filesystems[1280]: Checking size of /dev/vda9 Sep 10 00:40:28.087167 extend-filesystems[1280]: Resized partition /dev/vda9 Sep 10 00:40:28.091507 extend-filesystems[1339]: resize2fs 1.46.5 (30-Dec-2021) Sep 10 00:40:28.099545 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:40:28.102850 systemd-logind[1294]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:40:28.102877 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:40:28.104240 systemd-logind[1294]: New seat seat0. Sep 10 00:40:28.107735 systemd[1]: Started systemd-logind.service. Sep 10 00:40:28.146369 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:40:28.227344 extend-filesystems[1339]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:40:28.227344 extend-filesystems[1339]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:40:28.227344 extend-filesystems[1339]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 10 00:40:28.236402 extend-filesystems[1280]: Resized filesystem in /dev/vda9 Sep 10 00:40:28.237431 env[1311]: time="2025-09-10T00:40:28.232102359Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 10 00:40:28.228782 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:40:28.229047 systemd[1]: Finished extend-filesystems.service. Sep 10 00:40:28.237793 bash[1340]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:40:28.238580 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 10 00:40:28.263856 env[1311]: time="2025-09-10T00:40:28.263789813Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:40:28.264213 env[1311]: time="2025-09-10T00:40:28.264193300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.268645 env[1311]: time="2025-09-10T00:40:28.268613988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:40:28.268735 env[1311]: time="2025-09-10T00:40:28.268714587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.269129 env[1311]: time="2025-09-10T00:40:28.269106843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:40:28.269212 env[1311]: time="2025-09-10T00:40:28.269191822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.269301 env[1311]: time="2025-09-10T00:40:28.269279747Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 10 00:40:28.269404 env[1311]: time="2025-09-10T00:40:28.269383702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.269581 env[1311]: time="2025-09-10T00:40:28.269562647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.269932 env[1311]: time="2025-09-10T00:40:28.269912503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:40:28.270150 env[1311]: time="2025-09-10T00:40:28.270130392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:40:28.270231 env[1311]: time="2025-09-10T00:40:28.270211243Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 10 00:40:28.270379 env[1311]: time="2025-09-10T00:40:28.270359471Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 10 00:40:28.270477 env[1311]: time="2025-09-10T00:40:28.270455111Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334170797Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334262660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334290051Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334389147Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334506677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334529480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334541172Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334601235Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334680623Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334717342Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334757698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.334777345Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.335010833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:40:28.336312 env[1311]: time="2025-09-10T00:40:28.335170702Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:40:28.337011 env[1311]: time="2025-09-10T00:40:28.336172340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.336243945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337086715Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337208193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337369194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337399171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337412155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337442352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337469332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337482156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337515639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337530637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:40:28.337744 env[1311]: time="2025-09-10T00:40:28.337711777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.337727436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.338042266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.338056623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.338074136Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.338103010Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:40:28.338170 env[1311]: time="2025-09-10T00:40:28.338133117Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 10 00:40:28.338819 env[1311]: time="2025-09-10T00:40:28.338565007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:40:28.339038 env[1311]: time="2025-09-10T00:40:28.338968233Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:40:28.340043 env[1311]: time="2025-09-10T00:40:28.339227669Z" level=info msg="Connect containerd service" Sep 10 00:40:28.340043 env[1311]: time="2025-09-10T00:40:28.339661152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:40:28.341113 env[1311]: time="2025-09-10T00:40:28.341048072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:40:28.345072 env[1311]: time="2025-09-10T00:40:28.345024148Z" level=info msg="Start subscribing containerd event" Sep 10 00:40:28.345224 env[1311]: time="2025-09-10T00:40:28.345196511Z" level=info msg="Start recovering state" Sep 10 00:40:28.345426 env[1311]: time="2025-09-10T00:40:28.345396936Z" level=info msg="Start event monitor" Sep 10 00:40:28.346231 env[1311]: time="2025-09-10T00:40:28.346195243Z" level=info msg="Start snapshots syncer" Sep 10 00:40:28.346357 env[1311]: time="2025-09-10T00:40:28.346315709Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:40:28.347507 env[1311]: time="2025-09-10T00:40:28.347480132Z" level=info msg="Start streaming server" Sep 10 00:40:28.347830 env[1311]: time="2025-09-10T00:40:28.346497951Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 10 00:40:28.347995 env[1311]: time="2025-09-10T00:40:28.347964931Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:40:28.348468 systemd[1]: Started containerd.service. Sep 10 00:40:28.353448 env[1311]: time="2025-09-10T00:40:28.353374384Z" level=info msg="containerd successfully booted in 0.137697s" Sep 10 00:40:28.364219 locksmithd[1332]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:40:28.566045 tar[1306]: linux-amd64/LICENSE Sep 10 00:40:28.566404 tar[1306]: linux-amd64/README.md Sep 10 00:40:28.571545 systemd[1]: Finished prepare-helm.service. Sep 10 00:40:28.853204 sshd_keygen[1302]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:40:28.874452 systemd[1]: Finished sshd-keygen.service. Sep 10 00:40:28.876795 systemd[1]: Starting issuegen.service... Sep 10 00:40:28.881699 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:40:28.881987 systemd[1]: Finished issuegen.service. Sep 10 00:40:28.884538 systemd[1]: Starting systemd-user-sessions.service... Sep 10 00:40:28.890628 systemd[1]: Finished systemd-user-sessions.service. Sep 10 00:40:28.893708 systemd[1]: Started getty@tty1.service. Sep 10 00:40:28.896173 systemd[1]: Started serial-getty@ttyS0.service. Sep 10 00:40:28.897399 systemd[1]: Reached target getty.target. Sep 10 00:40:29.113118 systemd[1]: Created slice system-sshd.slice. Sep 10 00:40:29.115365 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:48422.service. Sep 10 00:40:29.153368 sshd[1378]: Accepted publickey for core from 10.0.0.1 port 48422 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:29.154739 sshd[1378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:29.166281 systemd-logind[1294]: New session 1 of user core. Sep 10 00:40:29.167216 systemd[1]: Created slice user-500.slice. Sep 10 00:40:29.174010 systemd[1]: Starting user-runtime-dir@500.service... Sep 10 00:40:29.189146 systemd[1]: Finished user-runtime-dir@500.service. Sep 10 00:40:29.192041 systemd[1]: Starting user@500.service... Sep 10 00:40:29.195321 (systemd)[1383]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:29.265664 systemd[1383]: Queued start job for default target default.target. Sep 10 00:40:29.266417 systemd[1383]: Reached target paths.target. Sep 10 00:40:29.266450 systemd[1383]: Reached target sockets.target. Sep 10 00:40:29.266462 systemd[1383]: Reached target timers.target. Sep 10 00:40:29.266481 systemd[1383]: Reached target basic.target. Sep 10 00:40:29.266521 systemd[1383]: Reached target default.target. Sep 10 00:40:29.266542 systemd[1383]: Startup finished in 65ms. Sep 10 00:40:29.266785 systemd[1]: Started user@500.service. Sep 10 00:40:29.269531 systemd[1]: Started session-1.scope. Sep 10 00:40:29.321904 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:48426.service. Sep 10 00:40:29.355772 sshd[1392]: Accepted publickey for core from 10.0.0.1 port 48426 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:29.359615 sshd[1392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:29.363957 systemd-logind[1294]: New session 2 of user core. Sep 10 00:40:29.364729 systemd[1]: Started session-2.scope. Sep 10 00:40:29.472372 sshd[1392]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:29.474851 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:48440.service. 
Sep 10 00:40:29.477764 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:48426.service: Deactivated successfully. Sep 10 00:40:29.479886 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:40:29.481654 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:40:29.484733 systemd-logind[1294]: Removed session 2. Sep 10 00:40:29.486174 systemd[1]: Started kubelet.service. Sep 10 00:40:29.487758 systemd[1]: Reached target multi-user.target. Sep 10 00:40:29.490195 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 10 00:40:29.519358 sshd[1397]: Accepted publickey for core from 10.0.0.1 port 48440 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:29.522926 sshd[1397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:29.535755 systemd[1]: Started session-3.scope. Sep 10 00:40:29.537028 systemd-logind[1294]: New session 3 of user core. Sep 10 00:40:29.596651 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 10 00:40:29.597008 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 10 00:40:29.600915 systemd[1]: Startup finished in 6.529s (kernel) + 7.607s (userspace) = 14.137s. Sep 10 00:40:29.664932 sshd[1397]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:29.668053 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:48440.service: Deactivated successfully. Sep 10 00:40:29.669287 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:40:29.669913 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:40:29.670780 systemd-logind[1294]: Removed session 3. Sep 10 00:40:30.175294 kubelet[1405]: E0910 00:40:30.175234 1405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:30.176914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:30.177066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:40:39.668057 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:36524.service. Sep 10 00:40:39.698861 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 36524 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:39.699861 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:39.703356 systemd-logind[1294]: New session 4 of user core. Sep 10 00:40:39.704156 systemd[1]: Started session-4.scope. Sep 10 00:40:39.757579 sshd[1421]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:39.759785 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:36526.service. Sep 10 00:40:39.760221 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:36524.service: Deactivated successfully. Sep 10 00:40:39.761112 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:40:39.761211 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:40:39.762079 systemd-logind[1294]: Removed session 4. 
Sep 10 00:40:39.790180 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:39.791060 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:39.794129 systemd-logind[1294]: New session 5 of user core. Sep 10 00:40:39.795010 systemd[1]: Started session-5.scope. Sep 10 00:40:39.845487 sshd[1427]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:39.847802 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:36538.service. Sep 10 00:40:39.848474 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:36526.service: Deactivated successfully. Sep 10 00:40:39.849424 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:40:39.849453 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:40:39.850484 systemd-logind[1294]: Removed session 5. Sep 10 00:40:39.877281 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 36538 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:39.878397 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:39.881599 systemd-logind[1294]: New session 6 of user core. Sep 10 00:40:39.882525 systemd[1]: Started session-6.scope. Sep 10 00:40:39.934538 sshd[1434]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:39.936698 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:51258.service. Sep 10 00:40:39.937131 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:36538.service: Deactivated successfully. Sep 10 00:40:39.938078 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:40:39.938095 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:40:39.938953 systemd-logind[1294]: Removed session 6. Sep 10 00:40:39.967770 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 51258 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:39.968694 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:39.971827 systemd-logind[1294]: New session 7 of user core. Sep 10 00:40:39.972572 systemd[1]: Started session-7.scope. Sep 10 00:40:40.027139 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:40:40.027347 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:40:40.036555 dbus-daemon[1278]: \xd0\u000d+6\xcfU: received setenforce notice (enforcing=515286384) Sep 10 00:40:40.038410 sudo[1446]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:40.039841 sshd[1440]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:40.042800 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:51262.service. Sep 10 00:40:40.043422 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:51258.service: Deactivated successfully. Sep 10 00:40:40.044665 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:40:40.044717 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:40:40.045869 systemd-logind[1294]: Removed session 7. Sep 10 00:40:40.072532 sshd[1449]: Accepted publickey for core from 10.0.0.1 port 51262 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:40.073521 sshd[1449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:40.076774 systemd-logind[1294]: New session 8 of user core. Sep 10 00:40:40.077717 systemd[1]: Started session-8.scope. 
Sep 10 00:40:40.130874 sudo[1455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:40:40.131071 sudo[1455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:40:40.133767 sudo[1455]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:40.138390 sudo[1454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:40:40.138655 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:40:40.147147 systemd[1]: Stopping audit-rules.service... Sep 10 00:40:40.146000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 10 00:40:40.148289 auditctl[1458]: No rules Sep 10 00:40:40.148573 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:40:40.148738 systemd[1]: Stopped audit-rules.service. Sep 10 00:40:40.149140 kernel: kauditd_printk_skb: 9 callbacks suppressed Sep 10 00:40:40.149204 kernel: audit: type=1305 audit(1757464840.146:136): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 10 00:40:40.149938 systemd[1]: Starting audit-rules.service... Sep 10 00:40:40.146000 audit[1458]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddb2b5310 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:40.155038 kernel: audit: type=1300 audit(1757464840.146:136): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddb2b5310 a2=420 a3=0 items=0 ppid=1 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:40.155087 kernel: audit: type=1327 audit(1757464840.146:136): proctitle=2F7362696E2F617564697463746C002D44 Sep 10 00:40:40.146000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 10 00:40:40.156307 kernel: audit: type=1131 audit(1757464840.146:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.169232 augenrules[1476]: No rules Sep 10 00:40:40.170048 systemd[1]: Finished audit-rules.service. Sep 10 00:40:40.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.171447 sudo[1454]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:40.169000 audit[1454]: USER_END pid=1454 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:40.175124 sshd[1449]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:40.176941 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:51262.service: Deactivated successfully. Sep 10 00:40:40.178000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:40:40.182925 kernel: audit: type=1130 audit(1757464840.169:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.182979 kernel: audit: type=1106 audit(1757464840.169:139): pid=1454 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.183013 kernel: audit: type=1104 audit(1757464840.169:140): pid=1454 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.183038 kernel: audit: type=1106 audit(1757464840.174:141): pid=1449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.169000 audit[1454]: CRED_DISP pid=1454 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.174000 audit[1449]: USER_END pid=1449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.178522 systemd[1]: Stopped kubelet.service. Sep 10 00:40:40.178701 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:40:40.183654 systemd[1]: Starting kubelet.service... Sep 10 00:40:40.183934 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:40:40.190511 kernel: audit: type=1104 audit(1757464840.174:142): pid=1449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.174000 audit[1449]: CRED_DISP pid=1449 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.187940 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:51266.service. Sep 10 00:40:40.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.41:22-10.0.0.1:51262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.191946 systemd-logind[1294]: Removed session 8. 
Sep 10 00:40:40.194522 kernel: audit: type=1131 audit(1757464840.174:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.41:22-10.0.0.1:51262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:51266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.218000 audit[1486]: USER_ACCT pid=1486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.220275 sshd[1486]: Accepted publickey for core from 10.0.0.1 port 51266 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:40:40.219000 audit[1486]: CRED_ACQ pid=1486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.219000 audit[1486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe7695740 a2=3 a3=0 items=0 ppid=1 pid=1486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:40.219000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:40:40.221340 sshd[1486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:40:40.224918 systemd-logind[1294]: New session 9 of user core. Sep 10 00:40:40.225871 systemd[1]: Started session-9.scope. Sep 10 00:40:40.228000 audit[1486]: USER_START pid=1486 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.230000 audit[1489]: CRED_ACQ pid=1489 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:40:40.277000 audit[1490]: USER_ACCT pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.278000 audit[1490]: CRED_REFR pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 10 00:40:40.279391 sudo[1490]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:40:40.279583 sudo[1490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:40:40.280000 audit[1490]: USER_START pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.307866 systemd[1]: Starting docker.service... Sep 10 00:40:40.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:40.336521 systemd[1]: Started kubelet.service. Sep 10 00:40:40.361435 env[1501]: time="2025-09-10T00:40:40.361321662Z" level=info msg="Starting up" Sep 10 00:40:40.363304 env[1501]: time="2025-09-10T00:40:40.363272270Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:40:40.364403 env[1501]: time="2025-09-10T00:40:40.364371651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:40:40.364473 env[1501]: time="2025-09-10T00:40:40.364417908Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:40:40.364473 env[1501]: time="2025-09-10T00:40:40.364436402Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:40:40.366969 env[1501]: time="2025-09-10T00:40:40.366947260Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:40:40.366969 env[1501]: time="2025-09-10T00:40:40.366964182Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:40:40.366969 env[1501]: time="2025-09-10T00:40:40.366976896Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:40:40.366969 env[1501]: time="2025-09-10T00:40:40.366985342Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:40:40.391669 kubelet[1515]: E0910 00:40:40.391617 1515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:40.394765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:40.394958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:40:40.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 10 00:40:40.957010 env[1501]: time="2025-09-10T00:40:40.956944946Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 10 00:40:40.957010 env[1501]: time="2025-09-10T00:40:40.956977537Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 10 00:40:40.957312 env[1501]: time="2025-09-10T00:40:40.957249587Z" level=info msg="Loading containers: start." 
Sep 10 00:40:41.014000 audit[1548]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.014000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd3da5ba80 a2=0 a3=7ffd3da5ba6c items=0 ppid=1501 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.014000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 10 00:40:41.016000 audit[1550]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.016000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcae23d750 a2=0 a3=7ffcae23d73c items=0 ppid=1501 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.016000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 10 00:40:41.019000 audit[1552]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.019000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7d75e780 a2=0 a3=7ffd7d75e76c items=0 ppid=1501 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 10 00:40:41.022000 audit[1554]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.022000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffecf1d7c70 a2=0 a3=7ffecf1d7c5c items=0 ppid=1501 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.022000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 10 00:40:41.025000 audit[1556]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.025000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd037aac90 a2=0 a3=7ffd037aac7c items=0 ppid=1501 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.025000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 10 00:40:41.051000 audit[1561]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Sep 10 00:40:41.051000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeea8d7f30 a2=0 a3=7ffeea8d7f1c items=0 ppid=1501 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.051000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 10 00:40:41.061000 audit[1563]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.061000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcefc37210 a2=0 a3=7ffcefc371fc items=0 ppid=1501 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.061000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 10 00:40:41.064000 audit[1565]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.064000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc7bbcde60 a2=0 a3=7ffc7bbcde4c items=0 ppid=1501 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.064000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 10 00:40:41.067000 audit[1567]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.067000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff91594190 a2=0 a3=7fff9159417c items=0 ppid=1501 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.067000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 10 00:40:41.080000 audit[1571]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.080000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff73de3150 a2=0 a3=7fff73de313c items=0 ppid=1501 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.080000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 10 00:40:41.089000 audit[1572]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.089000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd6315aa60 a2=0 a3=7ffd6315aa4c items=0 ppid=1501 
pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.089000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 10 00:40:41.104346 kernel: Initializing XFRM netlink socket Sep 10 00:40:41.143806 env[1501]: time="2025-09-10T00:40:41.143734857Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 10 00:40:41.164000 audit[1580]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.164000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc407bf3c0 a2=0 a3=7ffc407bf3ac items=0 ppid=1501 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.164000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 10 00:40:41.175000 audit[1583]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.175000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd1db14aa0 a2=0 a3=7ffd1db14a8c items=0 ppid=1501 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.175000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 10 00:40:41.178000 audit[1586]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.178000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe1353d950 a2=0 a3=7ffe1353d93c items=0 ppid=1501 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.178000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 10 00:40:41.181000 audit[1588]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.181000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe639ac760 a2=0 a3=7ffe639ac74c items=0 ppid=1501 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.181000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 10 00:40:41.183000 audit[1590]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.183000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffebe29e620 a2=0 a3=7ffebe29e60c items=0 ppid=1501 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.183000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 10 00:40:41.186000 audit[1592]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.186000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe2ddbe1e0 a2=0 a3=7ffe2ddbe1cc items=0 ppid=1501 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.186000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 10 00:40:41.188000 audit[1594]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.188000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffadeee8a0 a2=0 a3=7fffadeee88c items=0 ppid=1501 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.188000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 10 00:40:41.195000 audit[1597]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.195000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff394745f0 a2=0 a3=7fff394745dc items=0 ppid=1501 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.195000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 10 00:40:41.198000 audit[1599]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.198000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd12e680d0 a2=0 a3=7ffd12e680bc items=0 ppid=1501 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.198000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 10 00:40:41.201000 audit[1601]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.201000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff21561200 a2=0 a3=7fff215611ec items=0 ppid=1501 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 10 00:40:41.203000 audit[1603]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.203000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe9f285300 a2=0 a3=7ffe9f2852ec items=0 ppid=1501 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.203000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 10 00:40:41.205196 systemd-networkd[1075]: docker0: Link UP Sep 10 00:40:41.217000 audit[1607]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.217000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc464fce80 a2=0 a3=7ffc464fce6c items=0 ppid=1501 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.217000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 10 00:40:41.222000 audit[1608]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:40:41.222000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce07e79f0 a2=0 a3=7ffce07e79dc items=0 ppid=1501 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:40:41.222000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 10 00:40:41.223737 env[1501]: time="2025-09-10T00:40:41.223698649Z" level=info msg="Loading containers: done." 
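The audit PROCTITLE fields in the iptables records above (and in the earlier auditctl and sshd records) carry the process command line as hex-encoded bytes with NUL separators between arguments. A minimal Python sketch — the helper name is ours, introduced only for this annotation — recovers the underlying command from the first NETFILTER_CFG record of the Docker bridge setup:

def decode_proctitle(hex_argv: str) -> str:
    # audit logs argv as raw hex; arguments are separated by NUL bytes
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

# PROCTITLE from the first nat-table record above (pid=1548, creation of the DOCKER chain)
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER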
Sep 10 00:40:42.500020 env[1501]: time="2025-09-10T00:40:42.499831828Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:40:42.501423 env[1501]: time="2025-09-10T00:40:42.500529667Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 10 00:40:42.501423 env[1501]: time="2025-09-10T00:40:42.500735903Z" level=info msg="Daemon has completed initialization" Sep 10 00:40:42.540056 systemd[1]: Started docker.service. Sep 10 00:40:42.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:42.549321 env[1501]: time="2025-09-10T00:40:42.549199262Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:40:44.026865 env[1311]: time="2025-09-10T00:40:44.026708011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:40:46.487239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934904200.mount: Deactivated successfully. Sep 10 00:40:50.237825 env[1311]: time="2025-09-10T00:40:50.237747207Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:50.239851 env[1311]: time="2025-09-10T00:40:50.239781011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:50.241967 env[1311]: time="2025-09-10T00:40:50.241912197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:50.243949 env[1311]: time="2025-09-10T00:40:50.243878494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:50.244529 env[1311]: time="2025-09-10T00:40:50.244496743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:40:50.245237 env[1311]: time="2025-09-10T00:40:50.245209389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:40:50.646106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:40:50.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.646398 systemd[1]: Stopped kubelet.service. Sep 10 00:40:50.647822 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 10 00:40:50.647961 kernel: audit: type=1130 audit(1757464850.645:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.650257 systemd[1]: Starting kubelet.service... 
Sep 10 00:40:50.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.654370 kernel: audit: type=1131 audit(1757464850.645:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.870542 systemd[1]: Started kubelet.service. Sep 10 00:40:50.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.874362 kernel: audit: type=1130 audit(1757464850.869:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:40:50.959304 kubelet[1653]: E0910 00:40:50.958894 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:50.960903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:50.961064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:40:50.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 10 00:40:50.972381 kernel: audit: type=1131 audit(1757464850.960:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 10 00:40:53.935777 env[1311]: time="2025-09-10T00:40:53.935672547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:53.942472 env[1311]: time="2025-09-10T00:40:53.941695910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:53.952017 env[1311]: time="2025-09-10T00:40:53.951914901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:53.955963 env[1311]: time="2025-09-10T00:40:53.955883562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:53.957083 env[1311]: time="2025-09-10T00:40:53.957005155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 00:40:53.959112 env[1311]: time="2025-09-10T00:40:53.959026605Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:40:56.549682 env[1311]: time="2025-09-10T00:40:56.549594710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:56.552615 env[1311]: time="2025-09-10T00:40:56.552577362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:56.554419 env[1311]: time="2025-09-10T00:40:56.554372268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:56.556299 env[1311]: time="2025-09-10T00:40:56.556255589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:56.557600 env[1311]: time="2025-09-10T00:40:56.557555045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:40:56.558372 env[1311]: time="2025-09-10T00:40:56.558319178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:40:58.245543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288111893.mount: Deactivated successfully. 
Sep 10 00:40:59.765904 env[1311]: time="2025-09-10T00:40:59.765831643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:59.767865 env[1311]: time="2025-09-10T00:40:59.767823268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:59.769700 env[1311]: time="2025-09-10T00:40:59.769655112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:59.771502 env[1311]: time="2025-09-10T00:40:59.771459876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:40:59.772047 env[1311]: time="2025-09-10T00:40:59.772000360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:40:59.772758 env[1311]: time="2025-09-10T00:40:59.772726411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:41:00.403715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090630272.mount: Deactivated successfully. Sep 10 00:41:01.211981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 00:41:01.212165 systemd[1]: Stopped kubelet.service. Sep 10 00:41:01.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:01.213724 systemd[1]: Starting kubelet.service... Sep 10 00:41:01.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:01.221490 kernel: audit: type=1130 audit(1757464861.210:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:01.221564 kernel: audit: type=1131 audit(1757464861.210:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:01.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:01.406719 systemd[1]: Started kubelet.service. Sep 10 00:41:01.411383 kernel: audit: type=1130 audit(1757464861.405:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:41:01.626055 kubelet[1669]: E0910 00:41:01.625847 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:41:01.628481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:41:01.628640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:41:01.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 10 00:41:01.633440 kernel: audit: type=1131 audit(1757464861.627:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 10 00:41:02.378011 env[1311]: time="2025-09-10T00:41:02.377937651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.381359 env[1311]: time="2025-09-10T00:41:02.381250284Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.383669 env[1311]: time="2025-09-10T00:41:02.383612270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.385947 env[1311]: time="2025-09-10T00:41:02.385894193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.386796 env[1311]: time="2025-09-10T00:41:02.386755086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:41:02.387581 env[1311]: time="2025-09-10T00:41:02.387536697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:41:02.920590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878019524.mount: Deactivated successfully. 
Sep 10 00:41:02.927770 env[1311]: time="2025-09-10T00:41:02.927680678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.930268 env[1311]: time="2025-09-10T00:41:02.930211709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.932341 env[1311]: time="2025-09-10T00:41:02.932265574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.934062 env[1311]: time="2025-09-10T00:41:02.934006978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:02.934716 env[1311]: time="2025-09-10T00:41:02.934637139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:41:02.935210 env[1311]: time="2025-09-10T00:41:02.935165674Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:41:03.857933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359531056.mount: Deactivated successfully. Sep 10 00:41:06.662432 env[1311]: time="2025-09-10T00:41:06.662354063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:06.664567 env[1311]: time="2025-09-10T00:41:06.664503980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:06.666734 env[1311]: time="2025-09-10T00:41:06.666660679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:06.668645 env[1311]: time="2025-09-10T00:41:06.668601216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:06.669512 env[1311]: time="2025-09-10T00:41:06.669476838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:41:08.763579 systemd[1]: Stopped kubelet.service. Sep 10 00:41:08.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:08.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:08.767453 systemd[1]: Starting kubelet.service... 
Sep 10 00:41:08.770266 kernel: audit: type=1130 audit(1757464868.763:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:08.770440 kernel: audit: type=1131 audit(1757464868.763:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:08.788843 systemd[1]: Reloading. Sep 10 00:41:08.854373 /usr/lib/systemd/system-generators/torcx-generator[1726]: time="2025-09-10T00:41:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:41:08.854960 /usr/lib/systemd/system-generators/torcx-generator[1726]: time="2025-09-10T00:41:08Z" level=info msg="torcx already run" Sep 10 00:41:09.116605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:41:09.116625 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:41:09.137820 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:41:09.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:09.213105 systemd[1]: Started kubelet.service. Sep 10 00:41:09.217385 kernel: audit: type=1130 audit(1757464869.213:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:09.218268 systemd[1]: Stopping kubelet.service... Sep 10 00:41:09.219463 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:41:09.219713 systemd[1]: Stopped kubelet.service. Sep 10 00:41:09.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:09.222140 systemd[1]: Starting kubelet.service... Sep 10 00:41:09.224360 kernel: audit: type=1131 audit(1757464869.219:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:09.315926 systemd[1]: Started kubelet.service. Sep 10 00:41:09.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:09.323377 kernel: audit: type=1130 audit(1757464869.317:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:41:09.356437 kubelet[1792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:41:09.356437 kubelet[1792]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:41:09.356437 kubelet[1792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:41:09.356936 kubelet[1792]: I0910 00:41:09.356479 1792 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:41:09.620230 kubelet[1792]: I0910 00:41:09.620170 1792 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:41:09.620230 kubelet[1792]: I0910 00:41:09.620206 1792 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:41:09.620571 kubelet[1792]: I0910 00:41:09.620543 1792 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:41:09.635509 kubelet[1792]: E0910 00:41:09.635467 1792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:09.636386 kubelet[1792]: I0910 00:41:09.636367 1792 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:41:09.646971 kubelet[1792]: E0910 00:41:09.646925 1792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:41:09.646971 kubelet[1792]: I0910 00:41:09.646960 1792 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:41:09.652346 kubelet[1792]: I0910 00:41:09.652300 1792 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:41:09.652584 kubelet[1792]: I0910 00:41:09.652567 1792 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:41:09.652704 kubelet[1792]: I0910 00:41:09.652682 1792 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:41:09.652906 kubelet[1792]: I0910 00:41:09.652702 1792 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:41:09.653023 kubelet[1792]: I0910 00:41:09.652925 1792 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:41:09.653023 kubelet[1792]: I0910 00:41:09.652935 1792 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:41:09.653075 kubelet[1792]: I0910 00:41:09.653068 1792 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:41:09.659162 kubelet[1792]: I0910 00:41:09.659134 1792 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:41:09.659162 kubelet[1792]: I0910 00:41:09.659164 1792 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:41:09.659281 kubelet[1792]: I0910 00:41:09.659197 1792 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:41:09.659281 kubelet[1792]: I0910 00:41:09.659218 1792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:41:09.800600 kubelet[1792]: I0910 00:41:09.800539 1792 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:41:09.801170 kubelet[1792]: I0910 00:41:09.801122 1792 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:41:09.801170 kubelet[1792]: W0910 00:41:09.801184 1792 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
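The nodeConfig blob above is the container manager echoing kubelet defaults back into the log: cgroupfs driver, QoS cgroups rooted at /, "none" CPU and topology policies, and the standard hard-eviction thresholds. Translated back into KubeletConfiguration terms, those thresholds correspond to roughly the following evictionHard settings (a sketch of the defaults the log reports, not a file read from this host):

    # evictionHard equivalents of the HardEvictionThresholds logged above (illustrative)
    cat <<'EOF'
    evictionHard:
      memory.available:   "100Mi"   # Quantity 100Mi
      nodefs.available:   "10%"     # Percentage 0.1
      nodefs.inodesFree:  "5%"      # Percentage 0.05
      imagefs.available:  "15%"     # Percentage 0.15
      imagefs.inodesFree: "5%"      # Percentage 0.05
    EOF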
Sep 10 00:41:09.802940 kubelet[1792]: W0910 00:41:09.802875 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:09.803028 kubelet[1792]: E0910 00:41:09.802959 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:09.807674 kubelet[1792]: I0910 00:41:09.807654 1792 server.go:1274] "Started kubelet" Sep 10 00:41:09.808000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:09.809123 kubelet[1792]: I0910 00:41:09.808934 1792 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 10 00:41:09.809123 kubelet[1792]: I0910 00:41:09.808979 1792 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 10 00:41:09.809123 kubelet[1792]: I0910 00:41:09.809044 1792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:41:09.821610 kernel: audit: type=1400 audit(1757464869.808:195): avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:09.821794 kernel: audit: type=1401 audit(1757464869.808:195): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:09.821848 kernel: audit: type=1300 audit(1757464869.808:195): arch=c000003e syscall=188 success=no exit=-22 a0=c000a6f0b0 a1=c000a45ea8 a2=c000a6f080 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.808000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:09.808000 audit[1792]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a6f0b0 a1=c000a45ea8 a2=c000a6f080 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.814905 1792 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.815930 1792 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.816430 1792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:41:09.822113 kubelet[1792]: W0910 00:41:09.816622 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:09.822113 kubelet[1792]: E0910 00:41:09.816697 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.816704 1792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:41:09.822113 kubelet[1792]: E0910 00:41:09.817212 1792 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.817292 1792 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.817595 1792 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.817687 1792 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:41:09.822113 kubelet[1792]: I0910 00:41:09.818016 1792 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:41:09.822113 kubelet[1792]: W0910 00:41:09.818135 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:09.822582 kubelet[1792]: E0910 00:41:09.818197 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:09.822582 kubelet[1792]: E0910 00:41:09.818580 1792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" Sep 10 00:41:09.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:09.823260 kubelet[1792]: E0910 00:41:09.822088 1792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4fcd61c8eea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:41:09.807623914 +0000 UTC m=+0.486782975,LastTimestamp:2025-09-10 00:41:09.807623914 +0000 UTC 
m=+0.486782975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:41:09.808000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:09.829896 kernel: audit: type=1327 audit(1757464869.808:195): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:09.829936 kernel: audit: type=1400 audit(1757464869.808:196): avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:09.808000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:09.808000 audit[1792]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a61ae0 a1=c000a45ec0 a2=c000a6f140 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:09.812000 audit[1805]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:09.812000 audit[1805]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc21c727d0 a2=0 a3=7ffc21c727bc items=0 ppid=1792 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 10 00:41:09.813000 audit[1806]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:09.813000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc00ed6d50 a2=0 a3=7ffc00ed6d3c items=0 ppid=1792 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 10 00:41:09.818000 audit[1808]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:09.818000 audit[1808]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffa3848b20 a2=0 a3=7fffa3848b0c items=0 ppid=1792 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 10 00:41:09.821000 audit[1810]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:09.821000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff274d7bc0 a2=0 a3=7fff274d7bac items=0 ppid=1792 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:09.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 10 00:41:10.005904 kubelet[1792]: E0910 00:41:10.004335 1792 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:41:10.007481 kubelet[1792]: I0910 00:41:10.007181 1792 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:41:10.007481 kubelet[1792]: I0910 00:41:10.007337 1792 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:41:10.008996 kubelet[1792]: I0910 00:41:10.008964 1792 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:41:10.009192 kubelet[1792]: E0910 00:41:10.009154 1792 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:41:10.017000 audit[1816]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:10.017000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd83dbf120 a2=0 a3=7ffd83dbf10c items=0 ppid=1792 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 10 00:41:10.018509 kubelet[1792]: I0910 00:41:10.018004 1792 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 10 00:41:10.018000 audit[1817]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:10.018000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeddb96810 a2=0 a3=7ffeddb967fc items=0 ppid=1792 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 10 00:41:10.021000 audit[1818]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:10.021000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe538b8790 a2=0 a3=7ffe538b877c items=0 ppid=1792 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 10 00:41:10.023443 kubelet[1792]: E0910 00:41:10.023401 1792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" Sep 10 00:41:10.023443 kubelet[1792]: I0910 00:41:10.023423 1792 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:41:10.023535 kubelet[1792]: I0910 00:41:10.023470 1792 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:41:10.023535 kubelet[1792]: I0910 00:41:10.023509 1792 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:41:10.023588 kubelet[1792]: E0910 00:41:10.023558 1792 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:41:10.023000 audit[1819]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:10.023000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff38ad4e60 a2=0 a3=7fff38ad4e4c items=0 ppid=1792 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 10 00:41:10.024000 audit[1820]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:10.024000 audit[1820]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8de9a890 a2=0 a3=7ffd8de9a87c items=0 ppid=1792 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 10 00:41:10.025163 kubelet[1792]: W0910 00:41:10.025099 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:10.025163 kubelet[1792]: E0910 00:41:10.025163 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:10.025000 audit[1821]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:10.025000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe156fe8e0 a2=0 a3=7ffe156fe8cc items=0 ppid=1792 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 10 00:41:10.025000 audit[1822]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:10.025000 audit[1822]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe967734f0 a2=0 a3=7ffe967734dc items=0 ppid=1792 
pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 10 00:41:10.026000 audit[1823]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:10.026000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd774ce330 a2=0 a3=7ffd774ce31c items=0 ppid=1792 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 10 00:41:10.031524 kubelet[1792]: I0910 00:41:10.031503 1792 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:41:10.031611 kubelet[1792]: I0910 00:41:10.031532 1792 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:41:10.031611 kubelet[1792]: I0910 00:41:10.031557 1792 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:41:10.104896 kubelet[1792]: E0910 00:41:10.104814 1792 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:41:10.124370 kubelet[1792]: E0910 00:41:10.124302 1792 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:41:10.205757 kubelet[1792]: E0910 00:41:10.205703 1792 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:41:10.306878 kubelet[1792]: E0910 00:41:10.306669 1792 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:41:10.325043 kubelet[1792]: E0910 00:41:10.324951 1792 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:41:10.383080 kubelet[1792]: I0910 00:41:10.383014 1792 policy_none.go:49] "None policy: Start" Sep 10 00:41:10.384031 kubelet[1792]: I0910 00:41:10.384016 1792 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:41:10.384080 kubelet[1792]: I0910 00:41:10.384038 1792 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:41:10.390727 kubelet[1792]: I0910 00:41:10.390682 1792 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:41:10.390000 audit[1792]: AVC avc: denied { mac_admin } for pid=1792 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:10.390000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:10.390000 audit[1792]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000544630 a1=c0005307b0 a2=c000544600 a3=25 items=0 ppid=1 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:10.390000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:10.391021 kubelet[1792]: I0910 00:41:10.390777 1792 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 10 00:41:10.391021 kubelet[1792]: I0910 00:41:10.390941 1792 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:41:10.391021 kubelet[1792]: I0910 00:41:10.390964 1792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:41:10.391989 kubelet[1792]: I0910 00:41:10.391817 1792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:41:10.392620 kubelet[1792]: E0910 00:41:10.392585 1792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:41:10.424543 kubelet[1792]: E0910 00:41:10.424499 1792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" Sep 10 00:41:10.493044 kubelet[1792]: I0910 00:41:10.492974 1792 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:10.493579 kubelet[1792]: E0910 00:41:10.493546 1792 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Sep 10 00:41:10.695407 kubelet[1792]: I0910 00:41:10.695377 1792 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:10.695727 kubelet[1792]: E0910 00:41:10.695701 1792 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Sep 10 00:41:10.809127 kubelet[1792]: W0910 00:41:10.809033 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:10.809127 kubelet[1792]: E0910 00:41:10.809124 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:10.839674 kubelet[1792]: W0910 00:41:10.839606 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:10.839674 kubelet[1792]: E0910 00:41:10.839669 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:10.909095 kubelet[1792]: I0910 00:41:10.909045 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:10.909095 kubelet[1792]: I0910 00:41:10.909093 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:10.909287 kubelet[1792]: I0910 00:41:10.909117 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:10.909287 kubelet[1792]: I0910 00:41:10.909147 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:10.909287 kubelet[1792]: I0910 00:41:10.909183 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:10.909287 kubelet[1792]: I0910 00:41:10.909207 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:41:10.909287 kubelet[1792]: I0910 00:41:10.909233 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:10.909428 kubelet[1792]: I0910 00:41:10.909254 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:10.909428 kubelet[1792]: I0910 00:41:10.909293 1792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:11.030023 kubelet[1792]: E0910 00:41:11.029899 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:11.030484 env[1311]: time="2025-09-10T00:41:11.030450880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:11.030928 kubelet[1792]: E0910 00:41:11.030904 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:11.031214 kubelet[1792]: E0910 00:41:11.031173 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:11.031295 env[1311]: time="2025-09-10T00:41:11.031193621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:11.031556 env[1311]: time="2025-09-10T00:41:11.031527817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:325f13d3b5d3bb73d3672f640ba13e1e,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:11.097755 kubelet[1792]: I0910 00:41:11.097719 1792 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:11.098096 kubelet[1792]: E0910 00:41:11.098061 1792 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Sep 10 00:41:11.155242 kubelet[1792]: W0910 00:41:11.155151 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:11.155242 kubelet[1792]: E0910 00:41:11.155235 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:11.225399 kubelet[1792]: E0910 00:41:11.225346 1792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="1.6s" Sep 10 00:41:11.610159 kubelet[1792]: W0910 00:41:11.610089 1792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused Sep 10 00:41:11.610159 kubelet[1792]: E0910 00:41:11.610154 1792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:11.712797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933557473.mount: Deactivated successfully. Sep 10 00:41:11.721224 env[1311]: time="2025-09-10T00:41:11.721101601Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.724670 env[1311]: time="2025-09-10T00:41:11.724613676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.725794 env[1311]: time="2025-09-10T00:41:11.725755505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.727101 env[1311]: time="2025-09-10T00:41:11.727076747Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.730302 env[1311]: time="2025-09-10T00:41:11.730236922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.731430 env[1311]: time="2025-09-10T00:41:11.731405272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.732687 env[1311]: time="2025-09-10T00:41:11.732663143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.734797 env[1311]: time="2025-09-10T00:41:11.734744829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.736689 env[1311]: time="2025-09-10T00:41:11.736662313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.738396 env[1311]: time="2025-09-10T00:41:11.738353477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.739139 env[1311]: time="2025-09-10T00:41:11.739118361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.740803 env[1311]: time="2025-09-10T00:41:11.740733941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:11.783720 env[1311]: time="2025-09-10T00:41:11.783598214Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:11.783946 env[1311]: time="2025-09-10T00:41:11.783676112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:11.783946 env[1311]: time="2025-09-10T00:41:11.783692432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:11.783946 env[1311]: time="2025-09-10T00:41:11.783915597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21fcb77ad0bcf1f095a32237c2f1187230a86415db81956019e722fb4677ce4c pid=1834 runtime=io.containerd.runc.v2 Sep 10 00:41:11.785035 env[1311]: time="2025-09-10T00:41:11.784966092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:11.785035 env[1311]: time="2025-09-10T00:41:11.785032208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:11.785259 env[1311]: time="2025-09-10T00:41:11.785055423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:11.785259 env[1311]: time="2025-09-10T00:41:11.785164931Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6567f0fac1eacb7f9f41de0c0ba473bac7b551356b4a71ab1f3f75be2a9b6ea pid=1856 runtime=io.containerd.runc.v2 Sep 10 00:41:11.786042 env[1311]: time="2025-09-10T00:41:11.785982194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:11.786042 env[1311]: time="2025-09-10T00:41:11.786014986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:11.786163 env[1311]: time="2025-09-10T00:41:11.786028372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:11.786234 env[1311]: time="2025-09-10T00:41:11.786202172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b96ff03b1d102c88a3b115abc2fc37110e4093be1051a9b8f81a7dd99798012a pid=1853 runtime=io.containerd.runc.v2 Sep 10 00:41:11.787456 kubelet[1792]: E0910 00:41:11.787421 1792 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:41:11.902057 kubelet[1792]: I0910 00:41:11.899799 1792 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:11.902057 kubelet[1792]: E0910 00:41:11.900241 1792 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Sep 10 00:41:12.076427 env[1311]: time="2025-09-10T00:41:12.076312946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"b96ff03b1d102c88a3b115abc2fc37110e4093be1051a9b8f81a7dd99798012a\"" Sep 10 00:41:12.078837 kubelet[1792]: E0910 00:41:12.078740 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:12.083702 env[1311]: time="2025-09-10T00:41:12.083659296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6567f0fac1eacb7f9f41de0c0ba473bac7b551356b4a71ab1f3f75be2a9b6ea\"" Sep 10 00:41:12.085490 kubelet[1792]: E0910 00:41:12.085169 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:12.086209 env[1311]: time="2025-09-10T00:41:12.086165104Z" level=info msg="CreateContainer within sandbox \"b96ff03b1d102c88a3b115abc2fc37110e4093be1051a9b8f81a7dd99798012a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:41:12.095534 env[1311]: time="2025-09-10T00:41:12.095485362Z" level=info msg="CreateContainer within sandbox \"b6567f0fac1eacb7f9f41de0c0ba473bac7b551356b4a71ab1f3f75be2a9b6ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:41:12.097913 env[1311]: time="2025-09-10T00:41:12.097868969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:325f13d3b5d3bb73d3672f640ba13e1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"21fcb77ad0bcf1f095a32237c2f1187230a86415db81956019e722fb4677ce4c\"" Sep 10 00:41:12.099431 kubelet[1792]: E0910 00:41:12.099403 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:12.100993 env[1311]: time="2025-09-10T00:41:12.100964577Z" level=info msg="CreateContainer within sandbox \"21fcb77ad0bcf1f095a32237c2f1187230a86415db81956019e722fb4677ce4c\" 
for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:41:12.111048 env[1311]: time="2025-09-10T00:41:12.111003740Z" level=info msg="CreateContainer within sandbox \"b96ff03b1d102c88a3b115abc2fc37110e4093be1051a9b8f81a7dd99798012a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6c4fe7a135cfdf3314cf6049324a5ab2a8a1f4b97f2d9c64482e13608f76c9aa\"" Sep 10 00:41:12.111836 env[1311]: time="2025-09-10T00:41:12.111807446Z" level=info msg="StartContainer for \"6c4fe7a135cfdf3314cf6049324a5ab2a8a1f4b97f2d9c64482e13608f76c9aa\"" Sep 10 00:41:12.124478 env[1311]: time="2025-09-10T00:41:12.124424684Z" level=info msg="CreateContainer within sandbox \"b6567f0fac1eacb7f9f41de0c0ba473bac7b551356b4a71ab1f3f75be2a9b6ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86c77f1ad79ef587cc2722fc8007d1fd4998d641f0e3ffe15528112998ec110f\"" Sep 10 00:41:12.125617 env[1311]: time="2025-09-10T00:41:12.125577924Z" level=info msg="StartContainer for \"86c77f1ad79ef587cc2722fc8007d1fd4998d641f0e3ffe15528112998ec110f\"" Sep 10 00:41:12.130286 env[1311]: time="2025-09-10T00:41:12.130224096Z" level=info msg="CreateContainer within sandbox \"21fcb77ad0bcf1f095a32237c2f1187230a86415db81956019e722fb4677ce4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"465c77431b41f6fd2f0c280064f9aeb9ef64c6f8b6192b54f4f44d1d5ecefd07\"" Sep 10 00:41:12.131096 env[1311]: time="2025-09-10T00:41:12.131040797Z" level=info msg="StartContainer for \"465c77431b41f6fd2f0c280064f9aeb9ef64c6f8b6192b54f4f44d1d5ecefd07\"" Sep 10 00:41:12.325253 env[1311]: time="2025-09-10T00:41:12.325188229Z" level=info msg="StartContainer for \"6c4fe7a135cfdf3314cf6049324a5ab2a8a1f4b97f2d9c64482e13608f76c9aa\" returns successfully" Sep 10 00:41:12.344313 env[1311]: time="2025-09-10T00:41:12.344266857Z" level=info msg="StartContainer for \"465c77431b41f6fd2f0c280064f9aeb9ef64c6f8b6192b54f4f44d1d5ecefd07\" returns successfully" Sep 10 00:41:12.346613 env[1311]: time="2025-09-10T00:41:12.346589848Z" level=info msg="StartContainer for \"86c77f1ad79ef587cc2722fc8007d1fd4998d641f0e3ffe15528112998ec110f\" returns successfully" Sep 10 00:41:13.042587 kubelet[1792]: E0910 00:41:13.042557 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:13.045162 kubelet[1792]: E0910 00:41:13.045125 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:13.046224 kubelet[1792]: E0910 00:41:13.046206 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:13.489297 update_engine[1301]: I0910 00:41:13.489180 1301 update_attempter.cc:509] Updating boot flags... 
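With the three StartContainer calls returning successfully, the static control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler) are now running under containerd even though the API server at 10.0.0.41:6443 was still refusing connections moments earlier; the repeated dns.go warnings only note that /etc/resolv.conf lists more nameservers than the kubelet will propagate, so it applied the first three. Both are easy to confirm from the node itself; a minimal sketch, assuming crictl is pointed at containerd:

    # Sandboxes and containers created by the RunPodSandbox / StartContainer calls above
    crictl pods
    crictl ps --all
    # The resolv.conf behind the dns.go warnings (more nameserver lines than the kubelet passes through)
    cat /etc/resolv.conf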
Sep 10 00:41:13.501948 kubelet[1792]: I0910 00:41:13.501913 1792 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:14.249565 kubelet[1792]: E0910 00:41:14.249503 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:14.571446 kubelet[1792]: I0910 00:41:14.571125 1792 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:41:14.571649 kubelet[1792]: E0910 00:41:14.571626 1792 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:41:14.900804 kubelet[1792]: I0910 00:41:14.900646 1792 apiserver.go:52] "Watching apiserver" Sep 10 00:41:14.918416 kubelet[1792]: I0910 00:41:14.918355 1792 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:41:15.642398 kubelet[1792]: E0910 00:41:15.642359 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:16.252881 kubelet[1792]: E0910 00:41:16.252829 1792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:16.431867 systemd[1]: Reloading. Sep 10 00:41:16.508272 /usr/lib/systemd/system-generators/torcx-generator[2111]: time="2025-09-10T00:41:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:41:16.508801 /usr/lib/systemd/system-generators/torcx-generator[2111]: time="2025-09-10T00:41:16Z" level=info msg="torcx already run" Sep 10 00:41:16.590595 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:41:16.590612 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:41:16.609315 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:41:16.685996 kubelet[1792]: I0910 00:41:16.685948 1792 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:41:16.686123 systemd[1]: Stopping kubelet.service... Sep 10 00:41:16.710757 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:41:16.711118 systemd[1]: Stopped kubelet.service. Sep 10 00:41:16.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:16.712203 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 10 00:41:16.712251 kernel: audit: type=1131 audit(1757464876.710:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:41:16.713121 systemd[1]: Starting kubelet.service... Sep 10 00:41:16.806667 systemd[1]: Started kubelet.service. Sep 10 00:41:16.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:16.810355 kernel: audit: type=1130 audit(1757464876.806:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:16.849201 kubelet[2167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:41:16.849201 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:41:16.849201 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:41:16.850086 kubelet[2167]: I0910 00:41:16.849262 2167 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:41:16.855110 kubelet[2167]: I0910 00:41:16.855084 2167 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:41:16.855188 kubelet[2167]: I0910 00:41:16.855173 2167 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:41:16.855519 kubelet[2167]: I0910 00:41:16.855504 2167 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:41:16.861494 kubelet[2167]: I0910 00:41:16.861449 2167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:41:16.864120 kubelet[2167]: I0910 00:41:16.864080 2167 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:41:16.866934 kubelet[2167]: E0910 00:41:16.866895 2167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:41:16.866934 kubelet[2167]: I0910 00:41:16.866929 2167 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:41:16.871010 kubelet[2167]: I0910 00:41:16.870985 2167 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:41:16.871474 kubelet[2167]: I0910 00:41:16.871456 2167 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:41:16.871609 kubelet[2167]: I0910 00:41:16.871575 2167 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:41:16.871801 kubelet[2167]: I0910 00:41:16.871603 2167 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:41:16.871884 kubelet[2167]: I0910 00:41:16.871803 2167 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:41:16.871884 kubelet[2167]: I0910 00:41:16.871812 2167 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:41:16.871884 kubelet[2167]: I0910 00:41:16.871836 2167 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:41:16.871949 kubelet[2167]: I0910 00:41:16.871926 2167 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:41:16.871970 kubelet[2167]: I0910 00:41:16.871952 2167 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:41:16.872000 kubelet[2167]: I0910 00:41:16.871980 2167 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:41:16.872000 kubelet[2167]: I0910 00:41:16.871990 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:41:16.872665 kubelet[2167]: I0910 00:41:16.872619 2167 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:41:16.873171 kubelet[2167]: I0910 00:41:16.873144 2167 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:41:16.873578 kubelet[2167]: I0910 00:41:16.873559 2167 server.go:1274] "Started kubelet" Sep 10 00:41:16.879776 kubelet[2167]: I0910 00:41:16.879738 2167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:41:16.879989 kubelet[2167]: I0910 
00:41:16.879958 2167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:41:16.880405 kubelet[2167]: I0910 00:41:16.880388 2167 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:41:16.881802 kubelet[2167]: I0910 00:41:16.881776 2167 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:41:16.881922 kubelet[2167]: E0910 00:41:16.881902 2167 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:41:16.881000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:16.882059 kubelet[2167]: I0910 00:41:16.881951 2167 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 10 00:41:16.882170 kubelet[2167]: I0910 00:41:16.882154 2167 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 10 00:41:16.882300 kubelet[2167]: I0910 00:41:16.882285 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:41:16.883052 kubelet[2167]: I0910 00:41:16.883028 2167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:41:16.891406 kernel: audit: type=1400 audit(1757464876.881:212): avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:16.891539 kernel: audit: type=1401 audit(1757464876.881:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:16.891566 kernel: audit: type=1300 audit(1757464876.881:212): arch=c000003e syscall=188 success=no exit=-22 a0=c000cf0de0 a1=c000ccbe00 a2=c000cf0db0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:16.881000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:16.881000 audit[2167]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cf0de0 a1=c000ccbe00 a2=c000cf0db0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:16.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:16.891845 kubelet[2167]: I0910 00:41:16.889544 2167 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:41:16.891845 kubelet[2167]: I0910 00:41:16.890573 2167 volume_manager.go:289] "Starting Kubelet 
Volume Manager" Sep 10 00:41:16.891845 kubelet[2167]: I0910 00:41:16.891042 2167 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:41:16.892402 kubelet[2167]: I0910 00:41:16.892376 2167 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:41:16.892512 kubelet[2167]: I0910 00:41:16.892481 2167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:41:16.894859 kubelet[2167]: I0910 00:41:16.894704 2167 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:41:16.896054 kernel: audit: type=1327 audit(1757464876.881:212): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:16.881000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:16.896537 kubelet[2167]: I0910 00:41:16.896514 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:41:16.899246 kernel: audit: type=1400 audit(1757464876.881:213): avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:16.901056 kernel: audit: type=1401 audit(1757464876.881:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:16.881000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:16.901132 kubelet[2167]: I0910 00:41:16.899513 2167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:41:16.901132 kubelet[2167]: I0910 00:41:16.899532 2167 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:41:16.901132 kubelet[2167]: I0910 00:41:16.899637 2167 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:41:16.901132 kubelet[2167]: E0910 00:41:16.899688 2167 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:41:16.905813 kernel: audit: type=1300 audit(1757464876.881:213): arch=c000003e syscall=188 success=no exit=-22 a0=c0000c5a00 a1=c000590cf0 a2=c000857170 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:16.881000 audit[2167]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0000c5a00 a1=c000590cf0 a2=c000857170 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:16.909938 kernel: audit: type=1327 audit(1757464876.881:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:16.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:16.934395 kubelet[2167]: I0910 00:41:16.934368 2167 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:41:16.934574 kubelet[2167]: I0910 00:41:16.934555 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:41:16.934671 kubelet[2167]: I0910 00:41:16.934657 2167 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:41:16.934882 kubelet[2167]: I0910 00:41:16.934866 2167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:41:16.934973 kubelet[2167]: I0910 00:41:16.934943 2167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:41:16.935048 kubelet[2167]: I0910 00:41:16.935033 2167 policy_none.go:49] "None policy: Start" Sep 10 00:41:16.935750 kubelet[2167]: I0910 00:41:16.935717 2167 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:41:16.935816 kubelet[2167]: I0910 00:41:16.935762 2167 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:41:16.935953 kubelet[2167]: I0910 00:41:16.935939 2167 state_mem.go:75] "Updated machine memory state" Sep 10 00:41:16.937122 kubelet[2167]: I0910 00:41:16.937094 2167 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:41:16.936000 audit[2167]: AVC avc: denied { mac_admin } for pid=2167 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:41:16.936000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 10 00:41:16.936000 audit[2167]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 
a0=c0009efe00 a1=c000da5248 a2=c0009efdd0 a3=25 items=0 ppid=1 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:16.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 10 00:41:16.937392 kubelet[2167]: I0910 00:41:16.937171 2167 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 10 00:41:16.937392 kubelet[2167]: I0910 00:41:16.937306 2167 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:41:16.937392 kubelet[2167]: I0910 00:41:16.937317 2167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:41:16.937902 kubelet[2167]: I0910 00:41:16.937875 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:41:17.006652 kubelet[2167]: E0910 00:41:17.006609 2167 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:17.044875 kubelet[2167]: I0910 00:41:17.044854 2167 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:41:17.050179 kubelet[2167]: I0910 00:41:17.050163 2167 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:41:17.050280 kubelet[2167]: I0910 00:41:17.050245 2167 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:41:17.091931 kubelet[2167]: I0910 00:41:17.091757 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.091931 kubelet[2167]: I0910 00:41:17.091805 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.091931 kubelet[2167]: I0910 00:41:17.091841 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.091931 kubelet[2167]: I0910 00:41:17.091882 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:41:17.091931 kubelet[2167]: 
I0910 00:41:17.091906 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:17.092261 kubelet[2167]: I0910 00:41:17.091919 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:17.092261 kubelet[2167]: I0910 00:41:17.091937 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/325f13d3b5d3bb73d3672f640ba13e1e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"325f13d3b5d3bb73d3672f640ba13e1e\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:17.092261 kubelet[2167]: I0910 00:41:17.091961 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.092261 kubelet[2167]: I0910 00:41:17.091976 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.307719 kubelet[2167]: E0910 00:41:17.307661 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:17.307719 kubelet[2167]: E0910 00:41:17.307704 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:17.307949 kubelet[2167]: E0910 00:41:17.307661 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:17.873010 kubelet[2167]: I0910 00:41:17.872944 2167 apiserver.go:52] "Watching apiserver" Sep 10 00:41:17.890370 kubelet[2167]: I0910 00:41:17.890282 2167 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:41:17.917278 kubelet[2167]: E0910 00:41:17.917233 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:17.967458 kubelet[2167]: E0910 00:41:17.967403 2167 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:41:17.967886 kubelet[2167]: E0910 00:41:17.967859 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:17.974619 kubelet[2167]: E0910 00:41:17.974570 2167 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:41:17.974808 kubelet[2167]: E0910 00:41:17.974776 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:18.066531 kubelet[2167]: I0910 00:41:18.066457 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.066427526 podStartE2EDuration="3.066427526s" podCreationTimestamp="2025-09-10 00:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:18.006085316 +0000 UTC m=+1.194993408" watchObservedRunningTime="2025-09-10 00:41:18.066427526 +0000 UTC m=+1.255335618" Sep 10 00:41:18.066752 kubelet[2167]: I0910 00:41:18.066585 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.066581747 podStartE2EDuration="1.066581747s" podCreationTimestamp="2025-09-10 00:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:18.066248598 +0000 UTC m=+1.255156690" watchObservedRunningTime="2025-09-10 00:41:18.066581747 +0000 UTC m=+1.255489839" Sep 10 00:41:18.137319 kubelet[2167]: I0910 00:41:18.137146 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.137120672 podStartE2EDuration="1.137120672s" podCreationTimestamp="2025-09-10 00:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:18.090312227 +0000 UTC m=+1.279220319" watchObservedRunningTime="2025-09-10 00:41:18.137120672 +0000 UTC m=+1.326028764" Sep 10 00:41:18.918423 kubelet[2167]: E0910 00:41:18.918367 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:18.918423 kubelet[2167]: E0910 00:41:18.918412 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:19.967313 kubelet[2167]: E0910 00:41:19.967247 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:20.522631 kubelet[2167]: E0910 00:41:20.522572 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:23.500786 kubelet[2167]: I0910 00:41:23.500727 2167 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:41:23.501382 env[1311]: time="2025-09-10T00:41:23.501204926Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
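For the pod_startup_latency_tracker records above, firstStartedPulling and lastFinishedPulling sit at the zero time (these are static control-plane pods with no image pulls), so the reported podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. A short Go check of the kube-apiserver-localhost figure follows; it is a worked example built only from the timestamps in the log, not kubelet code.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker record for
        // kube-apiserver-localhost above.
        created, _ := time.Parse(time.RFC3339Nano, "2025-09-10T00:41:15Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-09-10T00:41:18.066427526Z")

        // With no image pulling, the SLO duration reduces to the plain difference.
        fmt.Printf("podStartSLOduration=%.9fs\n", running.Sub(created).Seconds())
    }

Running it prints podStartSLOduration=3.066427526s, matching the value recorded for kube-apiserver-localhost; the kube-scheduler and kube-controller-manager figures follow the same arithmetic from their own creation times.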
Sep 10 00:41:23.501721 kubelet[2167]: I0910 00:41:23.501452 2167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:41:24.542459 kubelet[2167]: I0910 00:41:24.542392 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27a67d83-9886-434b-887a-4766619eee03-kube-proxy\") pod \"kube-proxy-zq5qs\" (UID: \"27a67d83-9886-434b-887a-4766619eee03\") " pod="kube-system/kube-proxy-zq5qs" Sep 10 00:41:24.543001 kubelet[2167]: I0910 00:41:24.542436 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a67d83-9886-434b-887a-4766619eee03-xtables-lock\") pod \"kube-proxy-zq5qs\" (UID: \"27a67d83-9886-434b-887a-4766619eee03\") " pod="kube-system/kube-proxy-zq5qs" Sep 10 00:41:24.543001 kubelet[2167]: I0910 00:41:24.542497 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a67d83-9886-434b-887a-4766619eee03-lib-modules\") pod \"kube-proxy-zq5qs\" (UID: \"27a67d83-9886-434b-887a-4766619eee03\") " pod="kube-system/kube-proxy-zq5qs" Sep 10 00:41:24.543001 kubelet[2167]: I0910 00:41:24.542514 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv229\" (UniqueName: \"kubernetes.io/projected/27a67d83-9886-434b-887a-4766619eee03-kube-api-access-jv229\") pod \"kube-proxy-zq5qs\" (UID: \"27a67d83-9886-434b-887a-4766619eee03\") " pod="kube-system/kube-proxy-zq5qs" Sep 10 00:41:24.643877 kubelet[2167]: I0910 00:41:24.643795 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50c8b529-95ab-48a0-8185-5e975445ce16-var-lib-calico\") pod \"tigera-operator-58fc44c59b-4kgrh\" (UID: \"50c8b529-95ab-48a0-8185-5e975445ce16\") " pod="tigera-operator/tigera-operator-58fc44c59b-4kgrh" Sep 10 00:41:24.643877 kubelet[2167]: I0910 00:41:24.643867 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hbh\" (UniqueName: \"kubernetes.io/projected/50c8b529-95ab-48a0-8185-5e975445ce16-kube-api-access-b2hbh\") pod \"tigera-operator-58fc44c59b-4kgrh\" (UID: \"50c8b529-95ab-48a0-8185-5e975445ce16\") " pod="tigera-operator/tigera-operator-58fc44c59b-4kgrh" Sep 10 00:41:24.650388 kubelet[2167]: I0910 00:41:24.650340 2167 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 10 00:41:24.742340 kubelet[2167]: E0910 00:41:24.742277 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:24.742861 env[1311]: time="2025-09-10T00:41:24.742815077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zq5qs,Uid:27a67d83-9886-434b-887a-4766619eee03,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:24.761450 env[1311]: time="2025-09-10T00:41:24.761383296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:24.761450 env[1311]: time="2025-09-10T00:41:24.761450854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:24.761450 env[1311]: time="2025-09-10T00:41:24.761464760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:24.761686 env[1311]: time="2025-09-10T00:41:24.761621506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df57b780bc180558573f90835af9885008c8326bc444a34d43c929889e9c17ad pid=2225 runtime=io.containerd.runc.v2 Sep 10 00:41:24.794920 env[1311]: time="2025-09-10T00:41:24.794436726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zq5qs,Uid:27a67d83-9886-434b-887a-4766619eee03,Namespace:kube-system,Attempt:0,} returns sandbox id \"df57b780bc180558573f90835af9885008c8326bc444a34d43c929889e9c17ad\"" Sep 10 00:41:24.795406 kubelet[2167]: E0910 00:41:24.795371 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:24.797784 env[1311]: time="2025-09-10T00:41:24.797742019Z" level=info msg="CreateContainer within sandbox \"df57b780bc180558573f90835af9885008c8326bc444a34d43c929889e9c17ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:41:24.814261 env[1311]: time="2025-09-10T00:41:24.814192856Z" level=info msg="CreateContainer within sandbox \"df57b780bc180558573f90835af9885008c8326bc444a34d43c929889e9c17ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d682aee0c4d45cc0634c5c0813a9dd4733d399d9ef0d4aacacfb983d6ad708bb\"" Sep 10 00:41:24.816131 env[1311]: time="2025-09-10T00:41:24.815035966Z" level=info msg="StartContainer for \"d682aee0c4d45cc0634c5c0813a9dd4733d399d9ef0d4aacacfb983d6ad708bb\"" Sep 10 00:41:24.865190 env[1311]: time="2025-09-10T00:41:24.865112290Z" level=info msg="StartContainer for \"d682aee0c4d45cc0634c5c0813a9dd4733d399d9ef0d4aacacfb983d6ad708bb\" returns successfully" Sep 10 00:41:24.872009 env[1311]: time="2025-09-10T00:41:24.871961448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-4kgrh,Uid:50c8b529-95ab-48a0-8185-5e975445ce16,Namespace:tigera-operator,Attempt:0,}" Sep 10 00:41:24.887186 env[1311]: time="2025-09-10T00:41:24.887019488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:24.887186 env[1311]: time="2025-09-10T00:41:24.887054593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:24.887186 env[1311]: time="2025-09-10T00:41:24.887063831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:24.887452 env[1311]: time="2025-09-10T00:41:24.887189488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96ddfd7400affa8a6cb5b5599e39c816fbb08aa80c05400b49d4bff9bb423b43 pid=2299 runtime=io.containerd.runc.v2 Sep 10 00:41:24.929485 kubelet[2167]: E0910 00:41:24.928492 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:24.938199 env[1311]: time="2025-09-10T00:41:24.935981851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-4kgrh,Uid:50c8b529-95ab-48a0-8185-5e975445ce16,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"96ddfd7400affa8a6cb5b5599e39c816fbb08aa80c05400b49d4bff9bb423b43\"" Sep 10 00:41:24.943355 env[1311]: time="2025-09-10T00:41:24.940408238Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 00:41:24.996000 audit[2370]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:24.999484 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 10 00:41:24.999635 kernel: audit: type=1325 audit(1757464884.996:215): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:24.999677 kernel: audit: type=1300 audit(1757464884.996:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9badfa30 a2=0 a3=7ffc9badfa1c items=0 ppid=2276 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:24.996000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9badfa30 a2=0 a3=7ffc9badfa1c items=0 ppid=2276 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:24.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 10 00:41:25.001000 audit[2371]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.010240 kernel: audit: type=1327 audit(1757464884.996:215): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 10 00:41:25.010284 kernel: audit: type=1325 audit(1757464885.001:216): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.010300 kernel: audit: type=1300 audit(1757464885.001:216): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea5b7aaa0 a2=0 a3=7ffea5b7aa8c items=0 ppid=2276 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.001000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea5b7aaa0 a2=0 a3=7ffea5b7aa8c items=0 ppid=2276 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 10 00:41:25.017491 kernel: audit: type=1327 audit(1757464885.001:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 10 00:41:25.017528 kernel: audit: type=1325 audit(1757464885.002:217): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.002000 audit[2372]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.002000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeec75bc80 a2=0 a3=7ffeec75bc6c items=0 ppid=2276 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.024618 kernel: audit: type=1300 audit(1757464885.002:217): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeec75bc80 a2=0 a3=7ffeec75bc6c items=0 ppid=2276 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.024672 kernel: audit: type=1327 audit(1757464885.002:217): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 10 00:41:25.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 10 00:41:25.004000 audit[2373]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.029006 kernel: audit: type=1325 audit(1757464885.004:218): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.004000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc14378aa0 a2=0 a3=7ffc14378a8c items=0 ppid=2276 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 10 00:41:25.006000 audit[2374]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.006000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb644ec20 a2=0 a3=7fffb644ec0c items=0 ppid=2276 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 10 00:41:25.007000 audit[2375]: NETFILTER_CFG table=filter:43 family=10 entries=1 
op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.007000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff85881fe0 a2=0 a3=7fff85881fcc items=0 ppid=2276 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 10 00:41:25.099000 audit[2376]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.099000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe8400d1d0 a2=0 a3=7ffe8400d1bc items=0 ppid=2276 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 10 00:41:25.102000 audit[2378]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.102000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff2ac31360 a2=0 a3=7fff2ac3134c items=0 ppid=2276 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 10 00:41:25.106000 audit[2381]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.106000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb367ddd0 a2=0 a3=7ffcb367ddbc items=0 ppid=2276 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 10 00:41:25.107000 audit[2382]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.107000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffedc9aed0 a2=0 a3=7fffedc9aebc items=0 ppid=2276 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.107000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 10 00:41:25.111000 audit[2384]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.111000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd771b8a30 a2=0 a3=7ffd771b8a1c items=0 ppid=2276 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 10 00:41:25.112000 audit[2385]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.112000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff014b2410 a2=0 a3=7fff014b23fc items=0 ppid=2276 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 10 00:41:25.115000 audit[2387]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.115000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde1f2dd80 a2=0 a3=7ffde1f2dd6c items=0 ppid=2276 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 10 00:41:25.118000 audit[2390]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.118000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe323bb50 a2=0 a3=7fffe323bb3c items=0 ppid=2276 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 10 00:41:25.119000 audit[2391]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.119000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb4ffbaa0 a2=0 a3=7fffb4ffba8c items=0 
ppid=2276 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 10 00:41:25.122000 audit[2393]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.122000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4c9bd360 a2=0 a3=7ffc4c9bd34c items=0 ppid=2276 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.122000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 10 00:41:25.123000 audit[2394]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.123000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe108d3250 a2=0 a3=7ffe108d323c items=0 ppid=2276 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 10 00:41:25.125000 audit[2396]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.125000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff030d3ff0 a2=0 a3=7fff030d3fdc items=0 ppid=2276 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 10 00:41:25.128000 audit[2399]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.128000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce77d4930 a2=0 a3=7ffce77d491c items=0 ppid=2276 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 10 00:41:25.132000 audit[2402]: NETFILTER_CFG table=filter:57 
family=2 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.132000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6c644a70 a2=0 a3=7fff6c644a5c items=0 ppid=2276 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 10 00:41:25.132000 audit[2403]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.132000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcab3ed1c0 a2=0 a3=7ffcab3ed1ac items=0 ppid=2276 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 10 00:41:25.135000 audit[2405]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.135000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe3fd5c600 a2=0 a3=7ffe3fd5c5ec items=0 ppid=2276 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.135000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 10 00:41:25.138000 audit[2408]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.138000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda7f68a50 a2=0 a3=7ffda7f68a3c items=0 ppid=2276 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 10 00:41:25.139000 audit[2409]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.139000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4eadac10 a2=0 a3=7fff4eadabfc items=0 ppid=2276 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.139000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 10 00:41:25.142000 audit[2411]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 10 00:41:25.142000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc621edf40 a2=0 a3=7ffc621edf2c items=0 ppid=2276 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 10 00:41:25.163000 audit[2417]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:25.163000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffce8b1dde0 a2=0 a3=7ffce8b1ddcc items=0 ppid=2276 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:25.173000 audit[2417]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:25.173000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffce8b1dde0 a2=0 a3=7ffce8b1ddcc items=0 ppid=2276 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:25.175000 audit[2422]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.175000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff001af660 a2=0 a3=7fff001af64c items=0 ppid=2276 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 10 00:41:25.177000 audit[2424]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.177000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdba6b0120 a2=0 a3=7ffdba6b010c items=0 ppid=2276 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.177000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 10 00:41:25.181000 audit[2427]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.181000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdfbf629f0 a2=0 a3=7ffdfbf629dc items=0 ppid=2276 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 10 00:41:25.182000 audit[2428]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.182000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff66f0ead0 a2=0 a3=7fff66f0eabc items=0 ppid=2276 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.182000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 10 00:41:25.184000 audit[2430]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.184000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8aad6c90 a2=0 a3=7ffe8aad6c7c items=0 ppid=2276 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 10 00:41:25.185000 audit[2431]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.185000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8e909480 a2=0 a3=7ffc8e90946c items=0 ppid=2276 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.185000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 10 00:41:25.188000 audit[2433]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.188000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec7e128c0 a2=0 
a3=7ffec7e128ac items=0 ppid=2276 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 10 00:41:25.192000 audit[2436]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.192000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdf31c6670 a2=0 a3=7ffdf31c665c items=0 ppid=2276 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 10 00:41:25.193000 audit[2437]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.193000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff744fe6e0 a2=0 a3=7fff744fe6cc items=0 ppid=2276 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 10 00:41:25.195000 audit[2439]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.195000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd3d162a0 a2=0 a3=7ffdd3d1628c items=0 ppid=2276 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 10 00:41:25.196000 audit[2440]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.196000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd550333b0 a2=0 a3=7ffd5503339c items=0 ppid=2276 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 10 00:41:25.199000 
audit[2442]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.199000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccb85a810 a2=0 a3=7ffccb85a7fc items=0 ppid=2276 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 10 00:41:25.203000 audit[2445]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.203000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0ad40030 a2=0 a3=7fff0ad4001c items=0 ppid=2276 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 10 00:41:25.206000 audit[2448]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.206000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4db4e730 a2=0 a3=7ffc4db4e71c items=0 ppid=2276 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 10 00:41:25.208000 audit[2449]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.208000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd08c63aa0 a2=0 a3=7ffd08c63a8c items=0 ppid=2276 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 10 00:41:25.210000 audit[2451]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.210000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd1121cb80 a2=0 a3=7ffd1121cb6c items=0 ppid=2276 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 10 00:41:25.213000 audit[2454]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.213000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffee4d710f0 a2=0 a3=7ffee4d710dc items=0 ppid=2276 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 10 00:41:25.214000 audit[2455]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.214000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6f2e0500 a2=0 a3=7ffd6f2e04ec items=0 ppid=2276 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 10 00:41:25.216000 audit[2457]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.216000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeb0063850 a2=0 a3=7ffeb006383c items=0 ppid=2276 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 10 00:41:25.217000 audit[2458]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.217000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddad3c9e0 a2=0 a3=7ffddad3c9cc items=0 ppid=2276 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 10 00:41:25.219000 audit[2460]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.219000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffed8caaa20 a2=0 a3=7ffed8caaa0c items=0 ppid=2276 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 10 00:41:25.222000 audit[2463]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 10 00:41:25.222000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd513e0920 a2=0 a3=7ffd513e090c items=0 ppid=2276 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 10 00:41:25.225000 audit[2465]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 10 00:41:25.225000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdb0f68d70 a2=0 a3=7ffdb0f68d5c items=0 ppid=2276 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.225000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:25.225000 audit[2465]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 10 00:41:25.225000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdb0f68d70 a2=0 a3=7ffdb0f68d5c items=0 ppid=2276 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:25.225000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:26.854140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056279249.mount: Deactivated successfully. 
Sep 10 00:41:27.334208 kubelet[2167]: E0910 00:41:27.334166 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:27.343272 kubelet[2167]: I0910 00:41:27.343212 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zq5qs" podStartSLOduration=3.343157812 podStartE2EDuration="3.343157812s" podCreationTimestamp="2025-09-10 00:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:24.941667994 +0000 UTC m=+8.130576086" watchObservedRunningTime="2025-09-10 00:41:27.343157812 +0000 UTC m=+10.532065924" Sep 10 00:41:28.746722 env[1311]: time="2025-09-10T00:41:28.746637150Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:28.856772 env[1311]: time="2025-09-10T00:41:28.856570570Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:28.885875 env[1311]: time="2025-09-10T00:41:28.885811806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:28.916228 env[1311]: time="2025-09-10T00:41:28.916166529Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:28.916770 env[1311]: time="2025-09-10T00:41:28.916733978Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 00:41:28.918811 env[1311]: time="2025-09-10T00:41:28.918775734Z" level=info msg="CreateContainer within sandbox \"96ddfd7400affa8a6cb5b5599e39c816fbb08aa80c05400b49d4bff9bb423b43\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 00:41:29.899210 env[1311]: time="2025-09-10T00:41:29.899143880Z" level=info msg="CreateContainer within sandbox \"96ddfd7400affa8a6cb5b5599e39c816fbb08aa80c05400b49d4bff9bb423b43\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"09fb5ceeb6b8605c7de2630eb87ed6411d908d698ad5be6aad84ac798a90f2c3\"" Sep 10 00:41:29.899822 env[1311]: time="2025-09-10T00:41:29.899787823Z" level=info msg="StartContainer for \"09fb5ceeb6b8605c7de2630eb87ed6411d908d698ad5be6aad84ac798a90f2c3\"" Sep 10 00:41:29.972135 kubelet[2167]: E0910 00:41:29.972106 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:30.529149 kubelet[2167]: E0910 00:41:30.529112 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:30.618162 env[1311]: time="2025-09-10T00:41:30.618074337Z" level=info msg="StartContainer for \"09fb5ceeb6b8605c7de2630eb87ed6411d908d698ad5be6aad84ac798a90f2c3\" returns successfully" Sep 10 00:41:35.838176 
sudo[1490]: pam_unix(sudo:session): session closed for user root Sep 10 00:41:35.847106 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 10 00:41:35.847197 kernel: audit: type=1106 audit(1757464895.836:266): pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:41:35.847257 kernel: audit: type=1104 audit(1757464895.837:267): pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:41:35.836000 audit[1490]: USER_END pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:41:35.837000 audit[1490]: CRED_DISP pid=1490 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 10 00:41:35.849236 sshd[1486]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:35.849000 audit[1486]: USER_END pid=1486 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:41:35.852245 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:51266.service: Deactivated successfully. Sep 10 00:41:35.855602 kernel: audit: type=1106 audit(1757464895.849:268): pid=1486 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:41:35.853392 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:41:35.854024 systemd-logind[1294]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:41:35.854948 systemd-logind[1294]: Removed session 9. Sep 10 00:41:35.849000 audit[1486]: CRED_DISP pid=1486 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:41:35.863434 kernel: audit: type=1104 audit(1757464895.849:269): pid=1486 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:41:35.863507 kernel: audit: type=1131 audit(1757464895.849:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:51266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:41:35.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.41:22-10.0.0.1:51266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:41:36.224000 audit[2557]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.234112 kernel: audit: type=1325 audit(1757464896.224:271): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.234236 kernel: audit: type=1300 audit(1757464896.224:271): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe3c2bed60 a2=0 a3=7ffe3c2bed4c items=0 ppid=2276 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.224000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe3c2bed60 a2=0 a3=7ffe3c2bed4c items=0 ppid=2276 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:36.237378 kernel: audit: type=1327 audit(1757464896.224:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:36.236000 audit[2557]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.245582 kernel: audit: type=1325 audit(1757464896.236:272): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.245747 kernel: audit: type=1300 audit(1757464896.236:272): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3c2bed60 a2=0 a3=0 items=0 ppid=2276 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.236000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3c2bed60 a2=0 a3=0 items=0 ppid=2276 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:36.253000 audit[2559]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.253000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcd6842600 a2=0 a3=7ffcd68425ec items=0 ppid=2276 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:36.260000 audit[2559]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2559 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:36.260000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd6842600 a2=0 a3=0 items=0 ppid=2276 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:36.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:38.194000 audit[2562]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:38.194000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc69c1c180 a2=0 a3=7ffc69c1c16c items=0 ppid=2276 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:38.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:38.200000 audit[2562]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:38.200000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc69c1c180 a2=0 a3=0 items=0 ppid=2276 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:38.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:38.213000 audit[2564]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:38.213000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff49f332c0 a2=0 a3=7fff49f332ac items=0 ppid=2276 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:38.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:38.217000 audit[2564]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:38.217000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff49f332c0 a2=0 a3=0 items=0 ppid=2276 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:38.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:38.358098 kubelet[2167]: I0910 00:41:38.357985 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-4kgrh" podStartSLOduration=10.380354791 podStartE2EDuration="14.357958218s" 
podCreationTimestamp="2025-09-10 00:41:24 +0000 UTC" firstStartedPulling="2025-09-10 00:41:24.939932652 +0000 UTC m=+8.128840734" lastFinishedPulling="2025-09-10 00:41:28.917536079 +0000 UTC m=+12.106444161" observedRunningTime="2025-09-10 00:41:30.950365748 +0000 UTC m=+14.139273860" watchObservedRunningTime="2025-09-10 00:41:38.357958218 +0000 UTC m=+21.546866310" Sep 10 00:41:38.443588 kubelet[2167]: I0910 00:41:38.443526 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdtt\" (UniqueName: \"kubernetes.io/projected/48990a0e-6464-4ae4-95fe-e9dcd9a2dd93-kube-api-access-rhdtt\") pod \"calico-typha-66f7c8d66-khh45\" (UID: \"48990a0e-6464-4ae4-95fe-e9dcd9a2dd93\") " pod="calico-system/calico-typha-66f7c8d66-khh45" Sep 10 00:41:38.443588 kubelet[2167]: I0910 00:41:38.443588 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48990a0e-6464-4ae4-95fe-e9dcd9a2dd93-tigera-ca-bundle\") pod \"calico-typha-66f7c8d66-khh45\" (UID: \"48990a0e-6464-4ae4-95fe-e9dcd9a2dd93\") " pod="calico-system/calico-typha-66f7c8d66-khh45" Sep 10 00:41:38.443588 kubelet[2167]: I0910 00:41:38.443617 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48990a0e-6464-4ae4-95fe-e9dcd9a2dd93-typha-certs\") pod \"calico-typha-66f7c8d66-khh45\" (UID: \"48990a0e-6464-4ae4-95fe-e9dcd9a2dd93\") " pod="calico-system/calico-typha-66f7c8d66-khh45" Sep 10 00:41:38.663501 kubelet[2167]: E0910 00:41:38.663320 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:38.663965 env[1311]: time="2025-09-10T00:41:38.663920962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66f7c8d66-khh45,Uid:48990a0e-6464-4ae4-95fe-e9dcd9a2dd93,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:38.688344 env[1311]: time="2025-09-10T00:41:38.688182291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:38.688344 env[1311]: time="2025-09-10T00:41:38.688251481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:38.688344 env[1311]: time="2025-09-10T00:41:38.688265928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:38.688807 env[1311]: time="2025-09-10T00:41:38.688746010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1022cb2a98a54f1c5fdfb3c07361771ce0feb2013e9e24bf46b5d1f4f38b5b20 pid=2574 runtime=io.containerd.runc.v2 Sep 10 00:41:38.745684 kubelet[2167]: I0910 00:41:38.745619 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/183c6919-8ddd-4eec-a6c8-00734b706821-tigera-ca-bundle\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745684 kubelet[2167]: I0910 00:41:38.745683 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-lib-modules\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745966 kubelet[2167]: I0910 00:41:38.745712 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mppzb\" (UniqueName: \"kubernetes.io/projected/183c6919-8ddd-4eec-a6c8-00734b706821-kube-api-access-mppzb\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745966 kubelet[2167]: I0910 00:41:38.745734 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-flexvol-driver-host\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745966 kubelet[2167]: I0910 00:41:38.745752 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-xtables-lock\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745966 kubelet[2167]: I0910 00:41:38.745770 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/183c6919-8ddd-4eec-a6c8-00734b706821-node-certs\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.745966 kubelet[2167]: I0910 00:41:38.745788 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-policysync\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.746124 kubelet[2167]: I0910 00:41:38.745804 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-cni-log-dir\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.746124 kubelet[2167]: I0910 00:41:38.745819 2167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-cni-net-dir\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.746124 kubelet[2167]: I0910 00:41:38.745837 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-var-run-calico\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.746124 kubelet[2167]: I0910 00:41:38.745856 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-cni-bin-dir\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.746124 kubelet[2167]: I0910 00:41:38.745883 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/183c6919-8ddd-4eec-a6c8-00734b706821-var-lib-calico\") pod \"calico-node-f6cnz\" (UID: \"183c6919-8ddd-4eec-a6c8-00734b706821\") " pod="calico-system/calico-node-f6cnz" Sep 10 00:41:38.784809 env[1311]: time="2025-09-10T00:41:38.784734567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66f7c8d66-khh45,Uid:48990a0e-6464-4ae4-95fe-e9dcd9a2dd93,Namespace:calico-system,Attempt:0,} returns sandbox id \"1022cb2a98a54f1c5fdfb3c07361771ce0feb2013e9e24bf46b5d1f4f38b5b20\"" Sep 10 00:41:38.785728 kubelet[2167]: E0910 00:41:38.785698 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:38.787032 env[1311]: time="2025-09-10T00:41:38.786990737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:41:38.849929 kubelet[2167]: E0910 00:41:38.849438 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:38.849929 kubelet[2167]: W0910 00:41:38.849474 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:38.849929 kubelet[2167]: E0910 00:41:38.849509 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:38.851307 kubelet[2167]: E0910 00:41:38.851257 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:38.851307 kubelet[2167]: W0910 00:41:38.851286 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:38.851553 kubelet[2167]: E0910 00:41:38.851348 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:38.858414 kubelet[2167]: E0910 00:41:38.858358 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:38.858414 kubelet[2167]: W0910 00:41:38.858385 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:38.858414 kubelet[2167]: E0910 00:41:38.858430 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.000611 kubelet[2167]: E0910 00:41:39.000549 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:39.013992 env[1311]: time="2025-09-10T00:41:39.013793831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6cnz,Uid:183c6919-8ddd-4eec-a6c8-00734b706821,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:39.036161 kubelet[2167]: E0910 00:41:39.035903 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.036161 kubelet[2167]: W0910 00:41:39.035935 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.036161 kubelet[2167]: E0910 00:41:39.035970 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.036483 kubelet[2167]: E0910 00:41:39.036298 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.036483 kubelet[2167]: W0910 00:41:39.036312 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.036483 kubelet[2167]: E0910 00:41:39.036347 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.036691 kubelet[2167]: E0910 00:41:39.036672 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.036691 kubelet[2167]: W0910 00:41:39.036688 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.036816 kubelet[2167]: E0910 00:41:39.036709 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.036962 kubelet[2167]: E0910 00:41:39.036947 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.036962 kubelet[2167]: W0910 00:41:39.036961 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.037060 kubelet[2167]: E0910 00:41:39.036972 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.037278 kubelet[2167]: E0910 00:41:39.037247 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.037278 kubelet[2167]: W0910 00:41:39.037262 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.037278 kubelet[2167]: E0910 00:41:39.037273 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.037528 kubelet[2167]: E0910 00:41:39.037513 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.037528 kubelet[2167]: W0910 00:41:39.037525 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.037664 kubelet[2167]: E0910 00:41:39.037536 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.037862 kubelet[2167]: E0910 00:41:39.037821 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.037949 kubelet[2167]: W0910 00:41:39.037861 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.037949 kubelet[2167]: E0910 00:41:39.037898 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.038210 kubelet[2167]: E0910 00:41:39.038170 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.038210 kubelet[2167]: W0910 00:41:39.038186 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.038210 kubelet[2167]: E0910 00:41:39.038199 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.038510 kubelet[2167]: E0910 00:41:39.038465 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.038510 kubelet[2167]: W0910 00:41:39.038492 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.038510 kubelet[2167]: E0910 00:41:39.038505 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.038722 kubelet[2167]: E0910 00:41:39.038690 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.038722 kubelet[2167]: W0910 00:41:39.038707 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.038722 kubelet[2167]: E0910 00:41:39.038723 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.039176 kubelet[2167]: E0910 00:41:39.039154 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.039176 kubelet[2167]: W0910 00:41:39.039168 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.039361 kubelet[2167]: E0910 00:41:39.039186 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.039490 kubelet[2167]: E0910 00:41:39.039468 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.039490 kubelet[2167]: W0910 00:41:39.039485 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.039490 kubelet[2167]: E0910 00:41:39.039503 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.039763 kubelet[2167]: E0910 00:41:39.039736 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.039763 kubelet[2167]: W0910 00:41:39.039749 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.039863 kubelet[2167]: E0910 00:41:39.039763 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.040013 kubelet[2167]: E0910 00:41:39.039998 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.040090 kubelet[2167]: W0910 00:41:39.040016 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.040090 kubelet[2167]: E0910 00:41:39.040034 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.040289 kubelet[2167]: E0910 00:41:39.040259 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.040289 kubelet[2167]: W0910 00:41:39.040273 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.040536 kubelet[2167]: E0910 00:41:39.040286 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.040587 kubelet[2167]: E0910 00:41:39.040540 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.040587 kubelet[2167]: W0910 00:41:39.040551 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.040587 kubelet[2167]: E0910 00:41:39.040562 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.040800 kubelet[2167]: E0910 00:41:39.040783 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.040800 kubelet[2167]: W0910 00:41:39.040799 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.040930 kubelet[2167]: E0910 00:41:39.040815 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.041032 kubelet[2167]: E0910 00:41:39.041013 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.041032 kubelet[2167]: W0910 00:41:39.041026 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.041149 kubelet[2167]: E0910 00:41:39.041043 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.041278 kubelet[2167]: E0910 00:41:39.041245 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.041278 kubelet[2167]: W0910 00:41:39.041259 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.041278 kubelet[2167]: E0910 00:41:39.041270 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.041566 kubelet[2167]: E0910 00:41:39.041536 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.041566 kubelet[2167]: W0910 00:41:39.041556 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.041651 kubelet[2167]: E0910 00:41:39.041573 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.042054 env[1311]: time="2025-09-10T00:41:39.041957325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:39.042054 env[1311]: time="2025-09-10T00:41:39.042027366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:39.042054 env[1311]: time="2025-09-10T00:41:39.042046362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:39.042344 env[1311]: time="2025-09-10T00:41:39.042281374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647 pid=2640 runtime=io.containerd.runc.v2 Sep 10 00:41:39.047955 kubelet[2167]: E0910 00:41:39.047747 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.047955 kubelet[2167]: W0910 00:41:39.047774 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.047955 kubelet[2167]: E0910 00:41:39.047797 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.047955 kubelet[2167]: I0910 00:41:39.047832 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc5j\" (UniqueName: \"kubernetes.io/projected/6e46e9e0-10bc-4c50-9705-59d1dee4c692-kube-api-access-7pc5j\") pod \"csi-node-driver-k7vf2\" (UID: \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\") " pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:39.048279 kubelet[2167]: E0910 00:41:39.048094 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.048279 kubelet[2167]: W0910 00:41:39.048109 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.048279 kubelet[2167]: E0910 00:41:39.048122 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.048279 kubelet[2167]: I0910 00:41:39.048140 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e46e9e0-10bc-4c50-9705-59d1dee4c692-kubelet-dir\") pod \"csi-node-driver-k7vf2\" (UID: \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\") " pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:39.048452 kubelet[2167]: E0910 00:41:39.048418 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.048452 kubelet[2167]: W0910 00:41:39.048432 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.048452 kubelet[2167]: E0910 00:41:39.048445 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.048565 kubelet[2167]: I0910 00:41:39.048463 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e46e9e0-10bc-4c50-9705-59d1dee4c692-socket-dir\") pod \"csi-node-driver-k7vf2\" (UID: \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\") " pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:39.048753 kubelet[2167]: E0910 00:41:39.048729 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.048753 kubelet[2167]: W0910 00:41:39.048748 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.048856 kubelet[2167]: E0910 00:41:39.048771 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.048856 kubelet[2167]: I0910 00:41:39.048794 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e46e9e0-10bc-4c50-9705-59d1dee4c692-registration-dir\") pod \"csi-node-driver-k7vf2\" (UID: \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\") " pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:39.049079 kubelet[2167]: E0910 00:41:39.049054 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.049079 kubelet[2167]: W0910 00:41:39.049073 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.049189 kubelet[2167]: E0910 00:41:39.049102 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.049189 kubelet[2167]: I0910 00:41:39.049132 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e46e9e0-10bc-4c50-9705-59d1dee4c692-varrun\") pod \"csi-node-driver-k7vf2\" (UID: \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\") " pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:39.049484 kubelet[2167]: E0910 00:41:39.049453 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.049484 kubelet[2167]: W0910 00:41:39.049469 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.049593 kubelet[2167]: E0910 00:41:39.049572 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.049711 kubelet[2167]: E0910 00:41:39.049690 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.049711 kubelet[2167]: W0910 00:41:39.049705 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.049830 kubelet[2167]: E0910 00:41:39.049813 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.050099 kubelet[2167]: E0910 00:41:39.050077 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.050099 kubelet[2167]: W0910 00:41:39.050097 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.050229 kubelet[2167]: E0910 00:41:39.050206 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.050446 kubelet[2167]: E0910 00:41:39.050410 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.050530 kubelet[2167]: W0910 00:41:39.050445 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.050623 kubelet[2167]: E0910 00:41:39.050593 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.050740 kubelet[2167]: E0910 00:41:39.050717 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.050740 kubelet[2167]: W0910 00:41:39.050734 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.050843 kubelet[2167]: E0910 00:41:39.050824 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.050985 kubelet[2167]: E0910 00:41:39.050964 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.050985 kubelet[2167]: W0910 00:41:39.050984 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.051098 kubelet[2167]: E0910 00:41:39.051004 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.051296 kubelet[2167]: E0910 00:41:39.051277 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.051296 kubelet[2167]: W0910 00:41:39.051293 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.051417 kubelet[2167]: E0910 00:41:39.051306 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.051549 kubelet[2167]: E0910 00:41:39.051530 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.051549 kubelet[2167]: W0910 00:41:39.051547 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.051616 kubelet[2167]: E0910 00:41:39.051559 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.051877 kubelet[2167]: E0910 00:41:39.051813 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.051877 kubelet[2167]: W0910 00:41:39.051835 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.051877 kubelet[2167]: E0910 00:41:39.051848 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.052113 kubelet[2167]: E0910 00:41:39.052084 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.052113 kubelet[2167]: W0910 00:41:39.052102 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.052113 kubelet[2167]: E0910 00:41:39.052115 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.098356 env[1311]: time="2025-09-10T00:41:39.098281088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6cnz,Uid:183c6919-8ddd-4eec-a6c8-00734b706821,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\"" Sep 10 00:41:39.150848 kubelet[2167]: E0910 00:41:39.150797 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.150848 kubelet[2167]: W0910 00:41:39.150827 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.150848 kubelet[2167]: E0910 00:41:39.150855 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.151181 kubelet[2167]: E0910 00:41:39.151161 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.151240 kubelet[2167]: W0910 00:41:39.151192 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.151240 kubelet[2167]: E0910 00:41:39.151212 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.151476 kubelet[2167]: E0910 00:41:39.151455 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.151476 kubelet[2167]: W0910 00:41:39.151471 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.151570 kubelet[2167]: E0910 00:41:39.151490 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.151751 kubelet[2167]: E0910 00:41:39.151731 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.151751 kubelet[2167]: W0910 00:41:39.151747 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.151831 kubelet[2167]: E0910 00:41:39.151767 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.152109 kubelet[2167]: E0910 00:41:39.152073 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.152159 kubelet[2167]: W0910 00:41:39.152111 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.152184 kubelet[2167]: E0910 00:41:39.152154 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.152686 kubelet[2167]: E0910 00:41:39.152643 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.152728 kubelet[2167]: W0910 00:41:39.152682 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.152728 kubelet[2167]: E0910 00:41:39.152717 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.153030 kubelet[2167]: E0910 00:41:39.153011 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.153030 kubelet[2167]: W0910 00:41:39.153022 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.153093 kubelet[2167]: E0910 00:41:39.153060 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.153247 kubelet[2167]: E0910 00:41:39.153230 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.153247 kubelet[2167]: W0910 00:41:39.153241 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.153310 kubelet[2167]: E0910 00:41:39.153291 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.153526 kubelet[2167]: E0910 00:41:39.153512 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.153554 kubelet[2167]: W0910 00:41:39.153526 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.153584 kubelet[2167]: E0910 00:41:39.153561 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.153752 kubelet[2167]: E0910 00:41:39.153738 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.153752 kubelet[2167]: W0910 00:41:39.153748 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.153828 kubelet[2167]: E0910 00:41:39.153777 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.153966 kubelet[2167]: E0910 00:41:39.153938 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.153966 kubelet[2167]: W0910 00:41:39.153952 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.154037 kubelet[2167]: E0910 00:41:39.153992 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.154168 kubelet[2167]: E0910 00:41:39.154155 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.154168 kubelet[2167]: W0910 00:41:39.154166 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.154229 kubelet[2167]: E0910 00:41:39.154181 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.154496 kubelet[2167]: E0910 00:41:39.154480 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.154496 kubelet[2167]: W0910 00:41:39.154492 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.154573 kubelet[2167]: E0910 00:41:39.154512 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.154734 kubelet[2167]: E0910 00:41:39.154720 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.154734 kubelet[2167]: W0910 00:41:39.154731 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.154813 kubelet[2167]: E0910 00:41:39.154744 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.154938 kubelet[2167]: E0910 00:41:39.154923 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.154938 kubelet[2167]: W0910 00:41:39.154934 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.155002 kubelet[2167]: E0910 00:41:39.154969 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.155146 kubelet[2167]: E0910 00:41:39.155135 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.155175 kubelet[2167]: W0910 00:41:39.155146 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.155200 kubelet[2167]: E0910 00:41:39.155180 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.155395 kubelet[2167]: E0910 00:41:39.155374 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.155422 kubelet[2167]: W0910 00:41:39.155398 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.155451 kubelet[2167]: E0910 00:41:39.155435 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.155607 kubelet[2167]: E0910 00:41:39.155596 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.155635 kubelet[2167]: W0910 00:41:39.155607 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.155660 kubelet[2167]: E0910 00:41:39.155641 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.155828 kubelet[2167]: E0910 00:41:39.155812 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.155828 kubelet[2167]: W0910 00:41:39.155823 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.155904 kubelet[2167]: E0910 00:41:39.155838 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.156047 kubelet[2167]: E0910 00:41:39.156034 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.156047 kubelet[2167]: W0910 00:41:39.156045 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.156099 kubelet[2167]: E0910 00:41:39.156061 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.156256 kubelet[2167]: E0910 00:41:39.156243 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.156256 kubelet[2167]: W0910 00:41:39.156253 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.156362 kubelet[2167]: E0910 00:41:39.156268 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.156569 kubelet[2167]: E0910 00:41:39.156547 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.156604 kubelet[2167]: W0910 00:41:39.156569 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.156604 kubelet[2167]: E0910 00:41:39.156591 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.156908 kubelet[2167]: E0910 00:41:39.156882 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.156908 kubelet[2167]: W0910 00:41:39.156900 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.156968 kubelet[2167]: E0910 00:41:39.156941 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.157138 kubelet[2167]: E0910 00:41:39.157114 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.157138 kubelet[2167]: W0910 00:41:39.157131 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.157199 kubelet[2167]: E0910 00:41:39.157143 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.157415 kubelet[2167]: E0910 00:41:39.157399 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.157449 kubelet[2167]: W0910 00:41:39.157415 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.157449 kubelet[2167]: E0910 00:41:39.157428 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:39.168134 kubelet[2167]: E0910 00:41:39.168092 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:39.168134 kubelet[2167]: W0910 00:41:39.168123 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:39.168398 kubelet[2167]: E0910 00:41:39.168149 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:39.228000 audit[2729]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:39.228000 audit[2729]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc9fe56970 a2=0 a3=7ffc9fe5695c items=0 ppid=2276 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:39.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:39.234000 audit[2729]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:39.234000 audit[2729]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9fe56970 a2=0 a3=0 items=0 ppid=2276 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:39.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:40.548167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216531462.mount: Deactivated successfully. Sep 10 00:41:40.900791 kubelet[2167]: E0910 00:41:40.900638 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:41.370214 env[1311]: time="2025-09-10T00:41:41.370126406Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:41.372635 env[1311]: time="2025-09-10T00:41:41.372586367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:41.374268 env[1311]: time="2025-09-10T00:41:41.374241467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:41.376164 env[1311]: time="2025-09-10T00:41:41.376095630Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:41.376486 env[1311]: time="2025-09-10T00:41:41.376445868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 00:41:41.377754 env[1311]: time="2025-09-10T00:41:41.377712849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 00:41:41.385658 env[1311]: time="2025-09-10T00:41:41.384759768Z" level=info msg="CreateContainer within sandbox 
\"1022cb2a98a54f1c5fdfb3c07361771ce0feb2013e9e24bf46b5d1f4f38b5b20\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 00:41:41.400128 env[1311]: time="2025-09-10T00:41:41.400054621Z" level=info msg="CreateContainer within sandbox \"1022cb2a98a54f1c5fdfb3c07361771ce0feb2013e9e24bf46b5d1f4f38b5b20\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ca52feb6a9806a20135714519153363ad5ab9fabdc35633da7361f9fcee89eb\"" Sep 10 00:41:41.400809 env[1311]: time="2025-09-10T00:41:41.400630363Z" level=info msg="StartContainer for \"9ca52feb6a9806a20135714519153363ad5ab9fabdc35633da7361f9fcee89eb\"" Sep 10 00:41:41.467920 env[1311]: time="2025-09-10T00:41:41.467839047Z" level=info msg="StartContainer for \"9ca52feb6a9806a20135714519153363ad5ab9fabdc35633da7361f9fcee89eb\" returns successfully" Sep 10 00:41:41.962455 kubelet[2167]: E0910 00:41:41.962379 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:41.974885 kubelet[2167]: I0910 00:41:41.974798 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66f7c8d66-khh45" podStartSLOduration=1.383998169 podStartE2EDuration="3.97477809s" podCreationTimestamp="2025-09-10 00:41:38 +0000 UTC" firstStartedPulling="2025-09-10 00:41:38.786569896 +0000 UTC m=+21.975477998" lastFinishedPulling="2025-09-10 00:41:41.377349827 +0000 UTC m=+24.566257919" observedRunningTime="2025-09-10 00:41:41.974453329 +0000 UTC m=+25.163361451" watchObservedRunningTime="2025-09-10 00:41:41.97477809 +0000 UTC m=+25.163686182" Sep 10 00:41:42.003000 audit[2778]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:42.009382 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 10 00:41:42.009545 kernel: audit: type=1325 audit(1757464902.003:281): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:42.009590 kernel: audit: type=1300 audit(1757464902.003:281): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc30724370 a2=0 a3=7ffc3072435c items=0 ppid=2276 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:42.003000 audit[2778]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc30724370 a2=0 a3=7ffc3072435c items=0 ppid=2276 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:42.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:42.018352 kernel: audit: type=1327 audit(1757464902.003:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:42.018000 audit[2778]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:42.018000 audit[2778]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc30724370 a2=0 a3=7ffc3072435c items=0 
ppid=2276 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:42.027295 kernel: audit: type=1325 audit(1757464902.018:282): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:41:42.027465 kernel: audit: type=1300 audit(1757464902.018:282): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc30724370 a2=0 a3=7ffc3072435c items=0 ppid=2276 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:41:42.027489 kernel: audit: type=1327 audit(1757464902.018:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:42.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:41:42.062117 kubelet[2167]: E0910 00:41:42.062073 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.062117 kubelet[2167]: W0910 00:41:42.062108 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.062403 kubelet[2167]: E0910 00:41:42.062140 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.062452 kubelet[2167]: E0910 00:41:42.062440 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.062489 kubelet[2167]: W0910 00:41:42.062451 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.062489 kubelet[2167]: E0910 00:41:42.062462 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.062694 kubelet[2167]: E0910 00:41:42.062672 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.062694 kubelet[2167]: W0910 00:41:42.062689 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.062820 kubelet[2167]: E0910 00:41:42.062703 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.062940 kubelet[2167]: E0910 00:41:42.062923 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.062940 kubelet[2167]: W0910 00:41:42.062935 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.063033 kubelet[2167]: E0910 00:41:42.062947 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.063272 kubelet[2167]: E0910 00:41:42.063253 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.063272 kubelet[2167]: W0910 00:41:42.063267 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.063427 kubelet[2167]: E0910 00:41:42.063285 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.063525 kubelet[2167]: E0910 00:41:42.063508 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.063525 kubelet[2167]: W0910 00:41:42.063520 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.063613 kubelet[2167]: E0910 00:41:42.063531 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.063766 kubelet[2167]: E0910 00:41:42.063748 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.063766 kubelet[2167]: W0910 00:41:42.063760 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.063893 kubelet[2167]: E0910 00:41:42.063771 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.063976 kubelet[2167]: E0910 00:41:42.063959 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.063976 kubelet[2167]: W0910 00:41:42.063975 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.064144 kubelet[2167]: E0910 00:41:42.063986 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.064245 kubelet[2167]: E0910 00:41:42.064223 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.064245 kubelet[2167]: W0910 00:41:42.064243 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.064396 kubelet[2167]: E0910 00:41:42.064261 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.064547 kubelet[2167]: E0910 00:41:42.064527 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.064547 kubelet[2167]: W0910 00:41:42.064540 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.064683 kubelet[2167]: E0910 00:41:42.064558 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.064861 kubelet[2167]: E0910 00:41:42.064842 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.064861 kubelet[2167]: W0910 00:41:42.064860 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.064990 kubelet[2167]: E0910 00:41:42.064876 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.065091 kubelet[2167]: E0910 00:41:42.065075 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.065091 kubelet[2167]: W0910 00:41:42.065090 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.065208 kubelet[2167]: E0910 00:41:42.065102 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.065385 kubelet[2167]: E0910 00:41:42.065369 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.065385 kubelet[2167]: W0910 00:41:42.065382 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.065504 kubelet[2167]: E0910 00:41:42.065393 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.065579 kubelet[2167]: E0910 00:41:42.065565 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.065579 kubelet[2167]: W0910 00:41:42.065577 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.065650 kubelet[2167]: E0910 00:41:42.065586 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.065764 kubelet[2167]: E0910 00:41:42.065747 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.065764 kubelet[2167]: W0910 00:41:42.065759 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.065867 kubelet[2167]: E0910 00:41:42.065773 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.073288 kubelet[2167]: E0910 00:41:42.073252 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.073288 kubelet[2167]: W0910 00:41:42.073277 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.073288 kubelet[2167]: E0910 00:41:42.073297 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.073591 kubelet[2167]: E0910 00:41:42.073539 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.073591 kubelet[2167]: W0910 00:41:42.073551 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.073591 kubelet[2167]: E0910 00:41:42.073570 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.073800 kubelet[2167]: E0910 00:41:42.073782 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.073800 kubelet[2167]: W0910 00:41:42.073794 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.073891 kubelet[2167]: E0910 00:41:42.073810 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.074025 kubelet[2167]: E0910 00:41:42.074009 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.074025 kubelet[2167]: W0910 00:41:42.074021 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.074121 kubelet[2167]: E0910 00:41:42.074037 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.074206 kubelet[2167]: E0910 00:41:42.074190 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.074206 kubelet[2167]: W0910 00:41:42.074202 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.074294 kubelet[2167]: E0910 00:41:42.074217 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.074419 kubelet[2167]: E0910 00:41:42.074403 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.074419 kubelet[2167]: W0910 00:41:42.074415 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.074522 kubelet[2167]: E0910 00:41:42.074431 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.074655 kubelet[2167]: E0910 00:41:42.074638 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.074655 kubelet[2167]: W0910 00:41:42.074652 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.074756 kubelet[2167]: E0910 00:41:42.074683 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.074886 kubelet[2167]: E0910 00:41:42.074863 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.074886 kubelet[2167]: W0910 00:41:42.074878 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.074996 kubelet[2167]: E0910 00:41:42.074919 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.075101 kubelet[2167]: E0910 00:41:42.075083 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.075101 kubelet[2167]: W0910 00:41:42.075097 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.075196 kubelet[2167]: E0910 00:41:42.075112 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.075357 kubelet[2167]: E0910 00:41:42.075306 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.075357 kubelet[2167]: W0910 00:41:42.075345 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.075460 kubelet[2167]: E0910 00:41:42.075366 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.075584 kubelet[2167]: E0910 00:41:42.075567 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.075584 kubelet[2167]: W0910 00:41:42.075582 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.075659 kubelet[2167]: E0910 00:41:42.075596 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.075842 kubelet[2167]: E0910 00:41:42.075823 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.075842 kubelet[2167]: W0910 00:41:42.075837 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.075949 kubelet[2167]: E0910 00:41:42.075851 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.076094 kubelet[2167]: E0910 00:41:42.076055 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.076094 kubelet[2167]: W0910 00:41:42.076069 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.076186 kubelet[2167]: E0910 00:41:42.076094 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.076305 kubelet[2167]: E0910 00:41:42.076287 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.076305 kubelet[2167]: W0910 00:41:42.076299 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.076441 kubelet[2167]: E0910 00:41:42.076346 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.076538 kubelet[2167]: E0910 00:41:42.076520 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.076538 kubelet[2167]: W0910 00:41:42.076532 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.076637 kubelet[2167]: E0910 00:41:42.076555 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.076756 kubelet[2167]: E0910 00:41:42.076737 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.076756 kubelet[2167]: W0910 00:41:42.076749 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.076853 kubelet[2167]: E0910 00:41:42.076771 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.077029 kubelet[2167]: E0910 00:41:42.077000 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.077029 kubelet[2167]: W0910 00:41:42.077016 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.077029 kubelet[2167]: E0910 00:41:42.077027 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.077221 kubelet[2167]: E0910 00:41:42.077201 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.077221 kubelet[2167]: W0910 00:41:42.077214 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.077315 kubelet[2167]: E0910 00:41:42.077231 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.901079 kubelet[2167]: E0910 00:41:42.900991 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:42.964564 kubelet[2167]: E0910 00:41:42.964494 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:42.971629 kubelet[2167]: E0910 00:41:42.971567 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.971629 kubelet[2167]: W0910 00:41:42.971603 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.971629 kubelet[2167]: E0910 00:41:42.971632 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.971894 kubelet[2167]: E0910 00:41:42.971837 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.971894 kubelet[2167]: W0910 00:41:42.971849 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.971894 kubelet[2167]: E0910 00:41:42.971861 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.972086 kubelet[2167]: E0910 00:41:42.972056 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.972086 kubelet[2167]: W0910 00:41:42.972071 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.972086 kubelet[2167]: E0910 00:41:42.972083 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.972308 kubelet[2167]: E0910 00:41:42.972280 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.972308 kubelet[2167]: W0910 00:41:42.972294 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.972392 kubelet[2167]: E0910 00:41:42.972318 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.972553 kubelet[2167]: E0910 00:41:42.972527 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.972553 kubelet[2167]: W0910 00:41:42.972541 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.972607 kubelet[2167]: E0910 00:41:42.972552 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.972735 kubelet[2167]: E0910 00:41:42.972715 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.972735 kubelet[2167]: W0910 00:41:42.972731 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.972789 kubelet[2167]: E0910 00:41:42.972744 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.972934 kubelet[2167]: E0910 00:41:42.972916 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.972934 kubelet[2167]: W0910 00:41:42.972930 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.973004 kubelet[2167]: E0910 00:41:42.972942 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.973132 kubelet[2167]: E0910 00:41:42.973119 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.973161 kubelet[2167]: W0910 00:41:42.973132 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.973161 kubelet[2167]: E0910 00:41:42.973144 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.973388 kubelet[2167]: E0910 00:41:42.973374 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.973388 kubelet[2167]: W0910 00:41:42.973387 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.973466 kubelet[2167]: E0910 00:41:42.973397 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.973582 kubelet[2167]: E0910 00:41:42.973569 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.973617 kubelet[2167]: W0910 00:41:42.973582 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.973617 kubelet[2167]: E0910 00:41:42.973593 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.973781 kubelet[2167]: E0910 00:41:42.973769 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.973815 kubelet[2167]: W0910 00:41:42.973782 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.973815 kubelet[2167]: E0910 00:41:42.973793 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.973987 kubelet[2167]: E0910 00:41:42.973972 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.974024 kubelet[2167]: W0910 00:41:42.973986 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.974024 kubelet[2167]: E0910 00:41:42.973998 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.974197 kubelet[2167]: E0910 00:41:42.974184 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.974231 kubelet[2167]: W0910 00:41:42.974198 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.974231 kubelet[2167]: E0910 00:41:42.974209 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.974425 kubelet[2167]: E0910 00:41:42.974411 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.974425 kubelet[2167]: W0910 00:41:42.974424 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.974510 kubelet[2167]: E0910 00:41:42.974435 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.975177 kubelet[2167]: E0910 00:41:42.975155 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.975177 kubelet[2167]: W0910 00:41:42.975176 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.975285 kubelet[2167]: E0910 00:41:42.975190 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.979914 kubelet[2167]: E0910 00:41:42.979873 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.979914 kubelet[2167]: W0910 00:41:42.979904 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.980123 kubelet[2167]: E0910 00:41:42.979928 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.980268 kubelet[2167]: E0910 00:41:42.980242 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.980268 kubelet[2167]: W0910 00:41:42.980259 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.980379 kubelet[2167]: E0910 00:41:42.980280 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.980650 kubelet[2167]: E0910 00:41:42.980606 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.980650 kubelet[2167]: W0910 00:41:42.980639 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.980724 kubelet[2167]: E0910 00:41:42.980669 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.980900 kubelet[2167]: E0910 00:41:42.980870 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.980900 kubelet[2167]: W0910 00:41:42.980881 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.980900 kubelet[2167]: E0910 00:41:42.980894 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.981109 kubelet[2167]: E0910 00:41:42.981077 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.981109 kubelet[2167]: W0910 00:41:42.981094 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.981109 kubelet[2167]: E0910 00:41:42.981105 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.981344 kubelet[2167]: E0910 00:41:42.981295 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.981344 kubelet[2167]: W0910 00:41:42.981317 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.981429 kubelet[2167]: E0910 00:41:42.981352 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.981639 kubelet[2167]: E0910 00:41:42.981621 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.981639 kubelet[2167]: W0910 00:41:42.981637 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.981716 kubelet[2167]: E0910 00:41:42.981657 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.981933 kubelet[2167]: E0910 00:41:42.981908 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.981933 kubelet[2167]: W0910 00:41:42.981924 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.982012 kubelet[2167]: E0910 00:41:42.981957 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.982141 kubelet[2167]: E0910 00:41:42.982124 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.982141 kubelet[2167]: W0910 00:41:42.982138 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.982221 kubelet[2167]: E0910 00:41:42.982168 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.982456 kubelet[2167]: E0910 00:41:42.982437 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.982456 kubelet[2167]: W0910 00:41:42.982454 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.982555 kubelet[2167]: E0910 00:41:42.982473 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.982710 kubelet[2167]: E0910 00:41:42.982694 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.982710 kubelet[2167]: W0910 00:41:42.982708 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.982786 kubelet[2167]: E0910 00:41:42.982725 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.982919 kubelet[2167]: E0910 00:41:42.982903 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.982919 kubelet[2167]: W0910 00:41:42.982917 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.982996 kubelet[2167]: E0910 00:41:42.982934 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.983162 kubelet[2167]: E0910 00:41:42.983146 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.983162 kubelet[2167]: W0910 00:41:42.983160 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.983231 kubelet[2167]: E0910 00:41:42.983177 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.983485 kubelet[2167]: E0910 00:41:42.983466 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.983559 kubelet[2167]: W0910 00:41:42.983483 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.983559 kubelet[2167]: E0910 00:41:42.983510 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:42.983748 kubelet[2167]: E0910 00:41:42.983731 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.983748 kubelet[2167]: W0910 00:41:42.983742 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.983834 kubelet[2167]: E0910 00:41:42.983759 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.983957 kubelet[2167]: E0910 00:41:42.983941 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.983957 kubelet[2167]: W0910 00:41:42.983956 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.984040 kubelet[2167]: E0910 00:41:42.983973 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.984224 kubelet[2167]: E0910 00:41:42.984207 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.984224 kubelet[2167]: W0910 00:41:42.984221 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.984296 kubelet[2167]: E0910 00:41:42.984233 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:42.984893 kubelet[2167]: E0910 00:41:42.984876 2167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:42.984893 kubelet[2167]: W0910 00:41:42.984891 2167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:42.984990 kubelet[2167]: E0910 00:41:42.984902 2167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:43.023011 env[1311]: time="2025-09-10T00:41:43.022945956Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:43.026474 env[1311]: time="2025-09-10T00:41:43.026388943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:43.028642 env[1311]: time="2025-09-10T00:41:43.028599164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:43.030464 env[1311]: time="2025-09-10T00:41:43.030368568Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:43.031018 env[1311]: time="2025-09-10T00:41:43.030948477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 10 00:41:43.035692 env[1311]: time="2025-09-10T00:41:43.035625662Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:41:43.054059 env[1311]: time="2025-09-10T00:41:43.053981858Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e\"" Sep 10 00:41:43.055020 env[1311]: time="2025-09-10T00:41:43.054955397Z" level=info msg="StartContainer for \"8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e\"" Sep 10 00:41:43.126440 env[1311]: time="2025-09-10T00:41:43.126359881Z" level=info msg="StartContainer for \"8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e\" returns successfully" Sep 10 00:41:43.173571 env[1311]: time="2025-09-10T00:41:43.173379344Z" level=info msg="shim disconnected" id=8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e Sep 10 00:41:43.173571 env[1311]: time="2025-09-10T00:41:43.173446090Z" level=warning msg="cleaning up after shim disconnected" id=8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e namespace=k8s.io Sep 10 00:41:43.173571 env[1311]: time="2025-09-10T00:41:43.173460146Z" level=info msg="cleaning up dead shim" Sep 10 00:41:43.181850 env[1311]: time="2025-09-10T00:41:43.181758954Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:41:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n" Sep 10 00:41:43.382471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db75143edce4e687b6317e0bf164b7b3a68980b0bc30ea66b93f3af9b1a228e-rootfs.mount: Deactivated successfully. 
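The long run of FlexVolume messages above is two failures chained together: kubelet cannot find the nodeagent~uds driver binary (hence "executable file not found in $PATH"), so the init call produces no output at all, and decoding that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of the decoding half, for illustration only (the DriverStatus shape is an assumption following the FlexVolume convention, not kubelet's actual driver-call.go):

```go
// Illustrative sketch: the driver binary is absent, so the captured output is
// empty, and json.Unmarshal of an empty byte slice fails with exactly
// "unexpected end of JSON input" -- the error logged by driver-call.go:262.
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON a FlexVolume driver is expected to print
// in reply to "init" (assumed shape, not copied from kubelet).
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := []byte("") // what kubelet is left with after the exec fails
	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```

The probe repeats on every plugin rescan, which is why the same three messages recur throughout this stretch of the log.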
Sep 10 00:41:43.967079 kubelet[2167]: E0910 00:41:43.967032 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:43.968388 env[1311]: time="2025-09-10T00:41:43.968359700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:41:44.900297 kubelet[2167]: E0910 00:41:44.900209 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:46.901125 kubelet[2167]: E0910 00:41:46.901046 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:48.134199 env[1311]: time="2025-09-10T00:41:48.134114697Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:48.136566 env[1311]: time="2025-09-10T00:41:48.136522727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:48.138251 env[1311]: time="2025-09-10T00:41:48.138209153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:48.139894 env[1311]: time="2025-09-10T00:41:48.139850354Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:48.140505 env[1311]: time="2025-09-10T00:41:48.140451604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 00:41:48.142694 env[1311]: time="2025-09-10T00:41:48.142647616Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:41:48.167388 env[1311]: time="2025-09-10T00:41:48.167336486Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa\"" Sep 10 00:41:48.168831 env[1311]: time="2025-09-10T00:41:48.168776920Z" level=info msg="StartContainer for \"d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa\"" Sep 10 00:41:48.444116 env[1311]: time="2025-09-10T00:41:48.444046273Z" level=info msg="StartContainer for \"d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa\" returns successfully" Sep 10 00:41:49.153313 kubelet[2167]: E0910 00:41:49.153236 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:50.390102 env[1311]: time="2025-09-10T00:41:50.390018806Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:41:50.408924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa-rootfs.mount: Deactivated successfully. Sep 10 00:41:50.411308 env[1311]: time="2025-09-10T00:41:50.411258505Z" level=info msg="shim disconnected" id=d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa Sep 10 00:41:50.411445 env[1311]: time="2025-09-10T00:41:50.411313969Z" level=warning msg="cleaning up after shim disconnected" id=d050252ca22391f2a2c798374aeb4ba457795d7ce5965e6428728f9ab2bd63aa namespace=k8s.io Sep 10 00:41:50.411445 env[1311]: time="2025-09-10T00:41:50.411346249Z" level=info msg="cleaning up dead shim" Sep 10 00:41:50.419116 env[1311]: time="2025-09-10T00:41:50.419060950Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:41:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2961 runtime=io.containerd.runc.v2\n" Sep 10 00:41:50.421747 kubelet[2167]: I0910 00:41:50.421683 2167 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:41:50.837537 kubelet[2167]: I0910 00:41:50.837471 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe-calico-apiserver-certs\") pod \"calico-apiserver-c88bffbdf-qnnws\" (UID: \"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe\") " pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" Sep 10 00:41:50.837537 kubelet[2167]: I0910 00:41:50.837519 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk49\" (UniqueName: \"kubernetes.io/projected/ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe-kube-api-access-rsk49\") pod \"calico-apiserver-c88bffbdf-qnnws\" (UID: \"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe\") " pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" Sep 10 00:41:50.837537 kubelet[2167]: I0910 00:41:50.837541 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7bx\" (UniqueName: \"kubernetes.io/projected/cf3b90c9-768d-48d6-a148-e6a622704a6d-kube-api-access-2d7bx\") pod \"coredns-7c65d6cfc9-td5ft\" (UID: \"cf3b90c9-768d-48d6-a148-e6a622704a6d\") " pod="kube-system/coredns-7c65d6cfc9-td5ft" Sep 10 00:41:50.838677 kubelet[2167]: I0910 00:41:50.837566 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-ca-bundle\") pod \"whisker-74dc4f84bd-6t868\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " pod="calico-system/whisker-74dc4f84bd-6t868" Sep 10 00:41:50.838677 kubelet[2167]: I0910 00:41:50.837587 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9gz6\" (UniqueName: 
\"kubernetes.io/projected/ffbcf960-312a-4e1c-84c9-bb7a1a2c101f-kube-api-access-b9gz6\") pod \"coredns-7c65d6cfc9-dxvf6\" (UID: \"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f\") " pod="kube-system/coredns-7c65d6cfc9-dxvf6" Sep 10 00:41:50.838677 kubelet[2167]: I0910 00:41:50.837611 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgmjz\" (UniqueName: \"kubernetes.io/projected/8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4-kube-api-access-zgmjz\") pod \"calico-kube-controllers-6dc78c4547-tl96q\" (UID: \"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4\") " pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" Sep 10 00:41:50.838677 kubelet[2167]: I0910 00:41:50.837637 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0bb760-61ca-4fc9-a88d-45f47a6eb434-config\") pod \"goldmane-7988f88666-9rdtv\" (UID: \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\") " pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:50.838677 kubelet[2167]: I0910 00:41:50.837665 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c0bb760-61ca-4fc9-a88d-45f47a6eb434-goldmane-ca-bundle\") pod \"goldmane-7988f88666-9rdtv\" (UID: \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\") " pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:50.839075 kubelet[2167]: I0910 00:41:50.837692 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7c0bb760-61ca-4fc9-a88d-45f47a6eb434-goldmane-key-pair\") pod \"goldmane-7988f88666-9rdtv\" (UID: \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\") " pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:50.839075 kubelet[2167]: I0910 00:41:50.837717 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-backend-key-pair\") pod \"whisker-74dc4f84bd-6t868\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " pod="calico-system/whisker-74dc4f84bd-6t868" Sep 10 00:41:50.839075 kubelet[2167]: I0910 00:41:50.837743 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc78c4547-tl96q\" (UID: \"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4\") " pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" Sep 10 00:41:50.839075 kubelet[2167]: I0910 00:41:50.837768 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d9b8a9b-0a8b-44fd-b257-93a929c46e2c-calico-apiserver-certs\") pod \"calico-apiserver-c88bffbdf-mdkqg\" (UID: \"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c\") " pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" Sep 10 00:41:50.839075 kubelet[2167]: I0910 00:41:50.837788 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqnc\" (UniqueName: \"kubernetes.io/projected/7c0bb760-61ca-4fc9-a88d-45f47a6eb434-kube-api-access-xrqnc\") pod \"goldmane-7988f88666-9rdtv\" (UID: \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\") " 
pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:50.839253 kubelet[2167]: I0910 00:41:50.837809 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrfm\" (UniqueName: \"kubernetes.io/projected/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-kube-api-access-pbrfm\") pod \"whisker-74dc4f84bd-6t868\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " pod="calico-system/whisker-74dc4f84bd-6t868" Sep 10 00:41:50.839253 kubelet[2167]: I0910 00:41:50.837831 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffbcf960-312a-4e1c-84c9-bb7a1a2c101f-config-volume\") pod \"coredns-7c65d6cfc9-dxvf6\" (UID: \"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f\") " pod="kube-system/coredns-7c65d6cfc9-dxvf6" Sep 10 00:41:50.839253 kubelet[2167]: I0910 00:41:50.837853 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf3b90c9-768d-48d6-a148-e6a622704a6d-config-volume\") pod \"coredns-7c65d6cfc9-td5ft\" (UID: \"cf3b90c9-768d-48d6-a148-e6a622704a6d\") " pod="kube-system/coredns-7c65d6cfc9-td5ft" Sep 10 00:41:50.839253 kubelet[2167]: I0910 00:41:50.837888 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgfnk\" (UniqueName: \"kubernetes.io/projected/7d9b8a9b-0a8b-44fd-b257-93a929c46e2c-kube-api-access-sgfnk\") pod \"calico-apiserver-c88bffbdf-mdkqg\" (UID: \"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c\") " pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" Sep 10 00:41:50.903696 env[1311]: time="2025-09-10T00:41:50.903640353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k7vf2,Uid:6e46e9e0-10bc-4c50-9705-59d1dee4c692,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:51.162807 env[1311]: time="2025-09-10T00:41:51.162671527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:41:51.311549 env[1311]: time="2025-09-10T00:41:51.311455805Z" level=error msg="Failed to destroy network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:51.313903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86-shm.mount: Deactivated successfully. 
Sep 10 00:41:51.359087 env[1311]: time="2025-09-10T00:41:51.358979242Z" level=error msg="encountered an error cleaning up failed sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:51.359087 env[1311]: time="2025-09-10T00:41:51.359085512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k7vf2,Uid:6e46e9e0-10bc-4c50-9705-59d1dee4c692,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:51.359434 kubelet[2167]: E0910 00:41:51.359386 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:51.359543 kubelet[2167]: E0910 00:41:51.359457 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:51.359543 kubelet[2167]: E0910 00:41:51.359495 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k7vf2" Sep 10 00:41:51.359642 kubelet[2167]: E0910 00:41:51.359551 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k7vf2_calico-system(6e46e9e0-10bc-4c50-9705-59d1dee4c692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k7vf2_calico-system(6e46e9e0-10bc-4c50-9705-59d1dee4c692)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:51.572755 kubelet[2167]: E0910 00:41:51.572716 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:51.573309 env[1311]: time="2025-09-10T00:41:51.573261945Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxvf6,Uid:ffbcf960-312a-4e1c-84c9-bb7a1a2c101f,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:51.581339 env[1311]: time="2025-09-10T00:41:51.581300342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-qnnws,Uid:ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:41:51.592823 env[1311]: time="2025-09-10T00:41:51.592794757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9rdtv,Uid:7c0bb760-61ca-4fc9-a88d-45f47a6eb434,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:51.594026 kubelet[2167]: E0910 00:41:51.594000 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:51.594270 env[1311]: time="2025-09-10T00:41:51.594244959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-td5ft,Uid:cf3b90c9-768d-48d6-a148-e6a622704a6d,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:51.596834 env[1311]: time="2025-09-10T00:41:51.596806987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74dc4f84bd-6t868,Uid:f7c3f841-9e91-4b9c-998c-ca1e02e1d983,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:51.598269 env[1311]: time="2025-09-10T00:41:51.598219970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-mdkqg,Uid:7d9b8a9b-0a8b-44fd-b257-93a929c46e2c,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:41:51.598269 env[1311]: time="2025-09-10T00:41:51.598238154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc78c4547-tl96q,Uid:8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:52.164780 kubelet[2167]: I0910 00:41:52.164739 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:41:52.165453 env[1311]: time="2025-09-10T00:41:52.165401819Z" level=info msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" Sep 10 00:41:52.188911 env[1311]: time="2025-09-10T00:41:52.188839407Z" level=error msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" failed" error="failed to destroy network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:52.189141 kubelet[2167]: E0910 00:41:52.189101 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:41:52.189246 kubelet[2167]: E0910 00:41:52.189159 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86"} Sep 10 00:41:52.189246 kubelet[2167]: E0910 00:41:52.189219 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:52.189369 kubelet[2167]: E0910 00:41:52.189248 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e46e9e0-10bc-4c50-9705-59d1dee4c692\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k7vf2" podUID="6e46e9e0-10bc-4c50-9705-59d1dee4c692" Sep 10 00:41:53.462950 env[1311]: time="2025-09-10T00:41:53.462887426Z" level=error msg="Failed to destroy network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.463799 env[1311]: time="2025-09-10T00:41:53.463769432Z" level=error msg="encountered an error cleaning up failed sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.463939 env[1311]: time="2025-09-10T00:41:53.463905297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-mdkqg,Uid:7d9b8a9b-0a8b-44fd-b257-93a929c46e2c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.464698 kubelet[2167]: E0910 00:41:53.464292 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.464698 kubelet[2167]: E0910 00:41:53.464379 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" Sep 10 00:41:53.464698 kubelet[2167]: E0910 00:41:53.464402 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" Sep 10 00:41:53.465074 kubelet[2167]: E0910 00:41:53.464450 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c88bffbdf-mdkqg_calico-apiserver(7d9b8a9b-0a8b-44fd-b257-93a929c46e2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c88bffbdf-mdkqg_calico-apiserver(7d9b8a9b-0a8b-44fd-b257-93a929c46e2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" podUID="7d9b8a9b-0a8b-44fd-b257-93a929c46e2c" Sep 10 00:41:53.467988 env[1311]: time="2025-09-10T00:41:53.467942754Z" level=error msg="Failed to destroy network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.468515 env[1311]: time="2025-09-10T00:41:53.468478029Z" level=error msg="encountered an error cleaning up failed sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.468663 env[1311]: time="2025-09-10T00:41:53.468624654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-qnnws,Uid:ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.469040 kubelet[2167]: E0910 00:41:53.468987 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.469117 kubelet[2167]: E0910 00:41:53.469068 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" Sep 10 00:41:53.469117 kubelet[2167]: E0910 
00:41:53.469088 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" Sep 10 00:41:53.469177 kubelet[2167]: E0910 00:41:53.469127 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c88bffbdf-qnnws_calico-apiserver(ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c88bffbdf-qnnws_calico-apiserver(ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" podUID="ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe" Sep 10 00:41:53.480628 env[1311]: time="2025-09-10T00:41:53.480568558Z" level=error msg="Failed to destroy network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.481159 env[1311]: time="2025-09-10T00:41:53.481130914Z" level=error msg="encountered an error cleaning up failed sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.481296 env[1311]: time="2025-09-10T00:41:53.481263343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxvf6,Uid:ffbcf960-312a-4e1c-84c9-bb7a1a2c101f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.481666 kubelet[2167]: E0910 00:41:53.481616 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.481756 kubelet[2167]: E0910 00:41:53.481685 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-dxvf6" Sep 10 00:41:53.481756 kubelet[2167]: E0910 00:41:53.481705 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dxvf6" Sep 10 00:41:53.481756 kubelet[2167]: E0910 00:41:53.481741 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dxvf6_kube-system(ffbcf960-312a-4e1c-84c9-bb7a1a2c101f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dxvf6_kube-system(ffbcf960-312a-4e1c-84c9-bb7a1a2c101f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dxvf6" podUID="ffbcf960-312a-4e1c-84c9-bb7a1a2c101f" Sep 10 00:41:53.499637 env[1311]: time="2025-09-10T00:41:53.499570790Z" level=error msg="Failed to destroy network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.500201 env[1311]: time="2025-09-10T00:41:53.500170616Z" level=error msg="encountered an error cleaning up failed sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.500347 env[1311]: time="2025-09-10T00:41:53.500292485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-td5ft,Uid:cf3b90c9-768d-48d6-a148-e6a622704a6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.500710 kubelet[2167]: E0910 00:41:53.500661 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.500788 kubelet[2167]: E0910 00:41:53.500734 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-td5ft" Sep 10 00:41:53.500788 kubelet[2167]: E0910 00:41:53.500756 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-td5ft" Sep 10 00:41:53.500850 kubelet[2167]: E0910 00:41:53.500802 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-td5ft_kube-system(cf3b90c9-768d-48d6-a148-e6a622704a6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-td5ft_kube-system(cf3b90c9-768d-48d6-a148-e6a622704a6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-td5ft" podUID="cf3b90c9-768d-48d6-a148-e6a622704a6d" Sep 10 00:41:53.513419 env[1311]: time="2025-09-10T00:41:53.513350370Z" level=error msg="Failed to destroy network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.513755 env[1311]: time="2025-09-10T00:41:53.513716317Z" level=error msg="encountered an error cleaning up failed sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.513792 env[1311]: time="2025-09-10T00:41:53.513766371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9rdtv,Uid:7c0bb760-61ca-4fc9-a88d-45f47a6eb434,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.514009 kubelet[2167]: E0910 00:41:53.513972 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.514089 kubelet[2167]: E0910 00:41:53.514034 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:53.514089 kubelet[2167]: E0910 00:41:53.514074 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9rdtv" Sep 10 00:41:53.514146 kubelet[2167]: E0910 00:41:53.514113 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-9rdtv_calico-system(7c0bb760-61ca-4fc9-a88d-45f47a6eb434)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-9rdtv_calico-system(7c0bb760-61ca-4fc9-a88d-45f47a6eb434)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9rdtv" podUID="7c0bb760-61ca-4fc9-a88d-45f47a6eb434" Sep 10 00:41:53.521885 env[1311]: time="2025-09-10T00:41:53.521832349Z" level=error msg="Failed to destroy network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.522441 env[1311]: time="2025-09-10T00:41:53.522404632Z" level=error msg="encountered an error cleaning up failed sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.522578 env[1311]: time="2025-09-10T00:41:53.522456640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc78c4547-tl96q,Uid:8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.522658 kubelet[2167]: E0910 00:41:53.522627 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.522704 kubelet[2167]: E0910 00:41:53.522674 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" Sep 10 00:41:53.522704 kubelet[2167]: E0910 00:41:53.522691 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" Sep 10 00:41:53.523180 kubelet[2167]: E0910 00:41:53.522728 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc78c4547-tl96q_calico-system(8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc78c4547-tl96q_calico-system(8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" podUID="8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4" Sep 10 00:41:53.525387 env[1311]: time="2025-09-10T00:41:53.525349429Z" level=error msg="Failed to destroy network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.525673 env[1311]: time="2025-09-10T00:41:53.525616590Z" level=error msg="encountered an error cleaning up failed sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.525673 env[1311]: time="2025-09-10T00:41:53.525662907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74dc4f84bd-6t868,Uid:f7c3f841-9e91-4b9c-998c-ca1e02e1d983,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.525899 kubelet[2167]: E0910 00:41:53.525844 2167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:53.526016 kubelet[2167]: E0910 00:41:53.525914 2167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74dc4f84bd-6t868" Sep 10 00:41:53.526016 kubelet[2167]: E0910 00:41:53.525934 2167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74dc4f84bd-6t868" Sep 10 00:41:53.526016 kubelet[2167]: E0910 00:41:53.525975 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74dc4f84bd-6t868_calico-system(f7c3f841-9e91-4b9c-998c-ca1e02e1d983)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74dc4f84bd-6t868_calico-system(f7c3f841-9e91-4b9c-998c-ca1e02e1d983)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74dc4f84bd-6t868" podUID="f7c3f841-9e91-4b9c-998c-ca1e02e1d983" Sep 10 00:41:54.170114 kubelet[2167]: I0910 00:41:54.170066 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:41:54.170948 env[1311]: time="2025-09-10T00:41:54.170907536Z" level=info msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" Sep 10 00:41:54.180092 kubelet[2167]: I0910 00:41:54.180034 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:41:54.180621 env[1311]: time="2025-09-10T00:41:54.180588933Z" level=info msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" Sep 10 00:41:54.181861 kubelet[2167]: I0910 00:41:54.181563 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:41:54.181967 env[1311]: time="2025-09-10T00:41:54.181943395Z" level=info msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" Sep 10 00:41:54.184919 kubelet[2167]: I0910 00:41:54.184894 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:41:54.185322 env[1311]: time="2025-09-10T00:41:54.185287591Z" level=info msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" Sep 10 00:41:54.186920 kubelet[2167]: I0910 00:41:54.186893 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:41:54.188686 env[1311]: time="2025-09-10T00:41:54.188647877Z" level=info msg="StopPodSandbox for 
\"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" Sep 10 00:41:54.189872 kubelet[2167]: I0910 00:41:54.189846 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:41:54.190964 env[1311]: time="2025-09-10T00:41:54.190918117Z" level=info msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" Sep 10 00:41:54.191290 kubelet[2167]: I0910 00:41:54.191268 2167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:41:54.191779 env[1311]: time="2025-09-10T00:41:54.191751581Z" level=info msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" Sep 10 00:41:54.222517 env[1311]: time="2025-09-10T00:41:54.222453666Z" level=error msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" failed" error="failed to destroy network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.223375 kubelet[2167]: E0910 00:41:54.223186 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:41:54.223375 kubelet[2167]: E0910 00:41:54.223248 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d"} Sep 10 00:41:54.223375 kubelet[2167]: E0910 00:41:54.223298 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf3b90c9-768d-48d6-a148-e6a622704a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.223375 kubelet[2167]: E0910 00:41:54.223340 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf3b90c9-768d-48d6-a148-e6a622704a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-td5ft" podUID="cf3b90c9-768d-48d6-a148-e6a622704a6d" Sep 10 00:41:54.226550 env[1311]: time="2025-09-10T00:41:54.226501151Z" level=error msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" failed" error="failed to destroy network for sandbox 
\"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.227022 kubelet[2167]: E0910 00:41:54.226866 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:41:54.227022 kubelet[2167]: E0910 00:41:54.226920 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d"} Sep 10 00:41:54.227022 kubelet[2167]: E0910 00:41:54.226963 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.227022 kubelet[2167]: E0910 00:41:54.226986 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" podUID="7d9b8a9b-0a8b-44fd-b257-93a929c46e2c" Sep 10 00:41:54.248546 env[1311]: time="2025-09-10T00:41:54.248481970Z" level=error msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" failed" error="failed to destroy network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.249190 kubelet[2167]: E0910 00:41:54.248986 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:41:54.249190 kubelet[2167]: E0910 00:41:54.249055 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5"} Sep 10 00:41:54.249190 kubelet[2167]: E0910 00:41:54.249113 2167 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.249190 kubelet[2167]: E0910 00:41:54.249141 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dxvf6" podUID="ffbcf960-312a-4e1c-84c9-bb7a1a2c101f" Sep 10 00:41:54.251247 env[1311]: time="2025-09-10T00:41:54.251214849Z" level=error msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" failed" error="failed to destroy network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.251586 kubelet[2167]: E0910 00:41:54.251486 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:41:54.251586 kubelet[2167]: E0910 00:41:54.251515 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1"} Sep 10 00:41:54.251586 kubelet[2167]: E0910 00:41:54.251539 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.251586 kubelet[2167]: E0910 00:41:54.251555 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74dc4f84bd-6t868" podUID="f7c3f841-9e91-4b9c-998c-ca1e02e1d983" 
Sep 10 00:41:54.264106 env[1311]: time="2025-09-10T00:41:54.263994582Z" level=error msg="StopPodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" failed" error="failed to destroy network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.264355 kubelet[2167]: E0910 00:41:54.264288 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:41:54.264441 kubelet[2167]: E0910 00:41:54.264372 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4"} Sep 10 00:41:54.264441 kubelet[2167]: E0910 00:41:54.264408 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.264540 kubelet[2167]: E0910 00:41:54.264436 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" podUID="8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4" Sep 10 00:41:54.265869 env[1311]: time="2025-09-10T00:41:54.265819536Z" level=error msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" failed" error="failed to destroy network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.266314 kubelet[2167]: E0910 00:41:54.266131 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:41:54.266314 kubelet[2167]: E0910 00:41:54.266187 2167 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86"} Sep 10 00:41:54.266314 kubelet[2167]: E0910 00:41:54.266241 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.266314 kubelet[2167]: E0910 00:41:54.266272 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" podUID="ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe" Sep 10 00:41:54.280365 env[1311]: time="2025-09-10T00:41:54.280288779Z" level=error msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" failed" error="failed to destroy network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:54.280575 kubelet[2167]: E0910 00:41:54.280539 2167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:41:54.280625 kubelet[2167]: E0910 00:41:54.280586 2167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30"} Sep 10 00:41:54.280653 kubelet[2167]: E0910 00:41:54.280622 2167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:54.280653 kubelet[2167]: E0910 00:41:54.280645 2167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c0bb760-61ca-4fc9-a88d-45f47a6eb434\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9rdtv" podUID="7c0bb760-61ca-4fc9-a88d-45f47a6eb434" Sep 10 00:41:54.339336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1-shm.mount: Deactivated successfully. Sep 10 00:41:54.339472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d-shm.mount: Deactivated successfully. Sep 10 00:41:54.339564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4-shm.mount: Deactivated successfully. Sep 10 00:41:54.339672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30-shm.mount: Deactivated successfully. Sep 10 00:41:54.339767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86-shm.mount: Deactivated successfully. Sep 10 00:41:54.339856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5-shm.mount: Deactivated successfully. Sep 10 00:41:58.873923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949193217.mount: Deactivated successfully. Sep 10 00:41:59.850068 env[1311]: time="2025-09-10T00:41:59.849995776Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:59.852316 env[1311]: time="2025-09-10T00:41:59.852265244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:59.854128 env[1311]: time="2025-09-10T00:41:59.854084257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:59.855893 env[1311]: time="2025-09-10T00:41:59.855849759Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:41:59.856283 env[1311]: time="2025-09-10T00:41:59.856244991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 00:41:59.865816 env[1311]: time="2025-09-10T00:41:59.865762247Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:42:00.299261 env[1311]: time="2025-09-10T00:42:00.299196594Z" level=info msg="CreateContainer within sandbox \"4a55fb8d076c65076417555042f8edae74324e54511bf6acb7ce8a7d151e9647\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"150ba4cceac4d3fc61f0bb09f9e788abbbd603ea6afdf6c6147bff2625ff143e\"" Sep 10 00:42:00.299944 env[1311]: time="2025-09-10T00:42:00.299903249Z" level=info msg="StartContainer for 
\"150ba4cceac4d3fc61f0bb09f9e788abbbd603ea6afdf6c6147bff2625ff143e\"" Sep 10 00:42:00.391974 env[1311]: time="2025-09-10T00:42:00.391913868Z" level=info msg="StartContainer for \"150ba4cceac4d3fc61f0bb09f9e788abbbd603ea6afdf6c6147bff2625ff143e\" returns successfully" Sep 10 00:42:00.426129 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:42:00.426389 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 00:42:00.528470 env[1311]: time="2025-09-10T00:42:00.528395528Z" level=info msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.608 [INFO][3474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.609 [INFO][3474] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" iface="eth0" netns="/var/run/netns/cni-f1d6c8c7-f7ba-c315-2e97-a136bf3f54bf" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.609 [INFO][3474] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" iface="eth0" netns="/var/run/netns/cni-f1d6c8c7-f7ba-c315-2e97-a136bf3f54bf" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.609 [INFO][3474] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" iface="eth0" netns="/var/run/netns/cni-f1d6c8c7-f7ba-c315-2e97-a136bf3f54bf" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.610 [INFO][3474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.610 [INFO][3474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.663 [INFO][3483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.663 [INFO][3483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.663 [INFO][3483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.672 [WARNING][3483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.672 [INFO][3483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.674 [INFO][3483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:00.679259 env[1311]: 2025-09-10 00:42:00.677 [INFO][3474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:00.680051 env[1311]: time="2025-09-10T00:42:00.680002986Z" level=info msg="TearDown network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" successfully" Sep 10 00:42:00.680123 env[1311]: time="2025-09-10T00:42:00.680050074Z" level=info msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" returns successfully" Sep 10 00:42:00.683076 systemd[1]: run-netns-cni\x2df1d6c8c7\x2df7ba\x2dc315\x2d2e97\x2da136bf3f54bf.mount: Deactivated successfully. Sep 10 00:42:00.800390 kubelet[2167]: I0910 00:42:00.800320 2167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-ca-bundle\") pod \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " Sep 10 00:42:00.800390 kubelet[2167]: I0910 00:42:00.800379 2167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-backend-key-pair\") pod \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " Sep 10 00:42:00.800801 kubelet[2167]: I0910 00:42:00.800784 2167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbrfm\" (UniqueName: \"kubernetes.io/projected/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-kube-api-access-pbrfm\") pod \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\" (UID: \"f7c3f841-9e91-4b9c-998c-ca1e02e1d983\") " Sep 10 00:42:00.800918 kubelet[2167]: I0910 00:42:00.800820 2167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f7c3f841-9e91-4b9c-998c-ca1e02e1d983" (UID: "f7c3f841-9e91-4b9c-998c-ca1e02e1d983"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:42:00.803681 kubelet[2167]: I0910 00:42:00.803636 2167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f7c3f841-9e91-4b9c-998c-ca1e02e1d983" (UID: "f7c3f841-9e91-4b9c-998c-ca1e02e1d983"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:42:00.803940 kubelet[2167]: I0910 00:42:00.803906 2167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-kube-api-access-pbrfm" (OuterVolumeSpecName: "kube-api-access-pbrfm") pod "f7c3f841-9e91-4b9c-998c-ca1e02e1d983" (UID: "f7c3f841-9e91-4b9c-998c-ca1e02e1d983"). InnerVolumeSpecName "kube-api-access-pbrfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:42:00.901627 kubelet[2167]: I0910 00:42:00.901596 2167 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:42:00.901627 kubelet[2167]: I0910 00:42:00.901626 2167 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:42:00.901735 kubelet[2167]: I0910 00:42:00.901637 2167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbrfm\" (UniqueName: \"kubernetes.io/projected/f7c3f841-9e91-4b9c-998c-ca1e02e1d983-kube-api-access-pbrfm\") on node \"localhost\" DevicePath \"\"" Sep 10 00:42:00.968919 systemd[1]: var-lib-kubelet-pods-f7c3f841\x2d9e91\x2d4b9c\x2d998c\x2dca1e02e1d983-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbrfm.mount: Deactivated successfully. Sep 10 00:42:00.969051 systemd[1]: var-lib-kubelet-pods-f7c3f841\x2d9e91\x2d4b9c\x2d998c\x2dca1e02e1d983-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:42:01.239669 kubelet[2167]: I0910 00:42:01.239424 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f6cnz" podStartSLOduration=2.482689877 podStartE2EDuration="23.239395774s" podCreationTimestamp="2025-09-10 00:41:38 +0000 UTC" firstStartedPulling="2025-09-10 00:41:39.100376666 +0000 UTC m=+22.289284758" lastFinishedPulling="2025-09-10 00:41:59.857082563 +0000 UTC m=+43.045990655" observedRunningTime="2025-09-10 00:42:01.225471963 +0000 UTC m=+44.414380055" watchObservedRunningTime="2025-09-10 00:42:01.239395774 +0000 UTC m=+44.428303896" Sep 10 00:42:01.404044 kubelet[2167]: I0910 00:42:01.403983 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16c2cba6-675f-4bf9-b575-d80d9cc652c8-whisker-ca-bundle\") pod \"whisker-5c4f8f5f79-87rvz\" (UID: \"16c2cba6-675f-4bf9-b575-d80d9cc652c8\") " pod="calico-system/whisker-5c4f8f5f79-87rvz" Sep 10 00:42:01.404044 kubelet[2167]: I0910 00:42:01.404030 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/16c2cba6-675f-4bf9-b575-d80d9cc652c8-whisker-backend-key-pair\") pod \"whisker-5c4f8f5f79-87rvz\" (UID: \"16c2cba6-675f-4bf9-b575-d80d9cc652c8\") " pod="calico-system/whisker-5c4f8f5f79-87rvz" Sep 10 00:42:01.404044 kubelet[2167]: I0910 00:42:01.404050 2167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwr8\" (UniqueName: \"kubernetes.io/projected/16c2cba6-675f-4bf9-b575-d80d9cc652c8-kube-api-access-lpwr8\") pod \"whisker-5c4f8f5f79-87rvz\" (UID: 
\"16c2cba6-675f-4bf9-b575-d80d9cc652c8\") " pod="calico-system/whisker-5c4f8f5f79-87rvz" Sep 10 00:42:01.583629 env[1311]: time="2025-09-10T00:42:01.583492883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4f8f5f79-87rvz,Uid:16c2cba6-675f-4bf9-b575-d80d9cc652c8,Namespace:calico-system,Attempt:0,}" Sep 10 00:42:01.711000 audit[3584]: AVC avc: denied { write } for pid=3584 comm="tee" name="fd" dev="proc" ino=26800 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.711000 audit[3584]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6a9007e1 a2=241 a3=1b6 items=1 ppid=3536 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.757002 kernel: audit: type=1400 audit(1757464921.711:283): avc: denied { write } for pid=3584 comm="tee" name="fd" dev="proc" ino=26800 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.757157 kernel: audit: type=1300 audit(1757464921.711:283): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6a9007e1 a2=241 a3=1b6 items=1 ppid=3536 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.711000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 10 00:42:01.759365 kernel: audit: type=1307 audit(1757464921.711:283): cwd="/etc/service/enabled/bird6/log" Sep 10 00:42:01.711000 audit: PATH item=0 name="/dev/fd/63" inode=26797 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.764778 kernel: audit: type=1302 audit(1757464921.711:283): item=0 name="/dev/fd/63" inode=26797 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.768057 kernel: audit: type=1327 audit(1757464921.711:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.719000 audit[3592]: AVC avc: denied { write } for pid=3592 comm="tee" name="fd" dev="proc" ino=25795 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.719000 audit[3592]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea6f4d7d2 a2=241 a3=1b6 items=1 ppid=3549 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.779493 kernel: audit: type=1400 audit(1757464921.719:284): avc: denied { write } for pid=3592 comm="tee" name="fd" dev="proc" ino=25795 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.779542 kernel: audit: type=1300 audit(1757464921.719:284): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea6f4d7d2 a2=241 
a3=1b6 items=1 ppid=3549 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.779571 kernel: audit: type=1307 audit(1757464921.719:284): cwd="/etc/service/enabled/node-status-reporter/log" Sep 10 00:42:01.719000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 10 00:42:01.784726 kernel: audit: type=1302 audit(1757464921.719:284): item=0 name="/dev/fd/63" inode=25792 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.719000 audit: PATH item=0 name="/dev/fd/63" inode=25792 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.788215 kernel: audit: type=1327 audit(1757464921.719:284): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.719000 audit[3589]: AVC avc: denied { write } for pid=3589 comm="tee" name="fd" dev="proc" ino=25799 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.719000 audit[3589]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe59ccb7e3 a2=241 a3=1b6 items=1 ppid=3544 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.719000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 10 00:42:01.719000 audit: PATH item=0 name="/dev/fd/63" inode=25791 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.729000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=26805 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.729000 audit[3577]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc76f537e2 a2=241 a3=1b6 items=1 ppid=3535 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.729000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 10 00:42:01.729000 audit: PATH item=0 name="/dev/fd/63" inode=24259 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.733000 audit[3599]: AVC avc: denied { write } for pid=3599 comm="tee" name="fd" dev="proc" ino=24843 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.733000 audit[3599]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce92777d1 a2=241 a3=1b6 items=1 ppid=3540 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.733000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 10 00:42:01.733000 audit: PATH item=0 name="/dev/fd/63" inode=24837 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.797000 audit[3613]: AVC avc: denied { write } for pid=3613 comm="tee" name="fd" dev="proc" ino=24264 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.797000 audit[3613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd05b607e1 a2=241 a3=1b6 items=1 ppid=3546 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.797000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 10 00:42:01.797000 audit: PATH item=0 name="/dev/fd/63" inode=24846 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.798000 audit[3616]: AVC avc: denied { write } for pid=3616 comm="tee" name="fd" dev="proc" ino=24268 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 10 00:42:01.798000 audit[3616]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedd3a37e1 a2=241 a3=1b6 items=1 ppid=3537 pid=3616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.798000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 10 00:42:01.798000 audit: PATH item=0 name="/dev/fd/63" inode=24847 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:42:01.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit: BPF prog-id=10 op=LOAD Sep 10 00:42:01.891000 audit[3639]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff609dea50 a2=98 a3=1fffffffffffffff items=0 ppid=3538 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 10 00:42:01.891000 audit: BPF prog-id=10 op=UNLOAD Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.891000 audit: BPF prog-id=11 op=LOAD Sep 10 00:42:01.891000 audit[3639]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff609de930 a2=94 a3=3 items=0 ppid=3538 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 10 00:42:01.892000 audit: BPF prog-id=11 op=UNLOAD Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit: BPF prog-id=12 op=LOAD Sep 10 00:42:01.892000 audit[3639]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff609de970 a2=94 a3=7fff609deb50 items=0 ppid=3538 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 10 00:42:01.892000 audit: BPF prog-id=12 op=UNLOAD Sep 10 00:42:01.892000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.892000 audit[3639]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff609dea40 a2=50 a3=a000000085 items=0 ppid=3538 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit: BPF prog-id=13 op=LOAD Sep 10 00:42:01.893000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4e957630 a2=98 a3=3 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.893000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:01.893000 audit: BPF prog-id=13 op=UNLOAD Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit: BPF prog-id=14 op=LOAD Sep 10 00:42:01.893000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4e957420 a2=94 a3=54428f items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.893000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:01.893000 audit: BPF prog-id=14 op=UNLOAD Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 
audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:01.893000 audit: BPF prog-id=15 op=LOAD Sep 10 00:42:01.893000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4e957450 a2=94 a3=2 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:01.893000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:01.893000 audit: BPF prog-id=15 op=UNLOAD Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit: BPF 
prog-id=16 op=LOAD Sep 10 00:42:02.028000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd4e957310 a2=94 a3=1 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.028000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.028000 audit: BPF prog-id=16 op=UNLOAD Sep 10 00:42:02.028000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.028000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd4e9573e0 a2=50 a3=7ffd4e9574c0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.028000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e957320 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4e957350 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4e957260 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e957370 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e957350 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e957340 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e957370 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4e957350 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4e957370 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd4e957340 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd4e9573b0 a2=28 a3=0 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd4e957160 a2=50 a3=1 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit: BPF prog-id=17 op=LOAD Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd4e957160 a2=94 a3=5 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit: BPF prog-id=17 op=UNLOAD Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd4e957210 a2=50 a3=1 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd4e957330 a2=4 a3=38 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.036000 audit[3641]: AVC avc: denied { confidentiality } for pid=3641 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.036000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd4e957380 a2=94 a3=6 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.036000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { confidentiality } for pid=3641 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.037000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd4e956b30 a2=94 a3=88 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { perfmon } for pid=3641 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { bpf } for pid=3641 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.037000 audit[3641]: AVC avc: denied { confidentiality } for pid=3641 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.037000 audit[3641]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd4e956b30 a2=94 a3=88 items=0 ppid=3538 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 
00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit: BPF prog-id=18 op=LOAD Sep 10 00:42:02.044000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa282f1d0 a2=98 a3=1999999999999999 items=0 ppid=3538 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.044000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 10 00:42:02.044000 audit: BPF prog-id=18 op=UNLOAD Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 
00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit: BPF prog-id=19 op=LOAD Sep 10 00:42:02.044000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa282f0b0 a2=94 a3=ffff items=0 ppid=3538 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.044000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 10 00:42:02.044000 audit: BPF prog-id=19 op=UNLOAD Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { perfmon } for pid=3661 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit[3661]: AVC avc: denied { bpf } for pid=3661 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.044000 audit: 
BPF prog-id=20 op=LOAD Sep 10 00:42:02.044000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa282f0f0 a2=94 a3=7fffa282f2d0 items=0 ppid=3538 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.044000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 10 00:42:02.044000 audit: BPF prog-id=20 op=UNLOAD Sep 10 00:42:02.167552 systemd-networkd[1075]: vxlan.calico: Link UP Sep 10 00:42:02.167561 systemd-networkd[1075]: vxlan.calico: Gained carrier Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.187000 audit: BPF prog-id=21 op=LOAD Sep 10 00:42:02.187000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8c3ba310 a2=98 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.187000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.188000 audit: BPF prog-id=21 op=UNLOAD Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf 
} for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit: BPF prog-id=22 op=LOAD Sep 10 00:42:02.188000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8c3ba120 a2=94 a3=54428f items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.188000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.188000 audit: BPF prog-id=22 op=UNLOAD Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.188000 audit: BPF prog-id=23 op=LOAD Sep 10 00:42:02.188000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8c3ba150 a2=94 a3=2 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.188000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit: BPF prog-id=23 op=UNLOAD Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba020 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8c3ba050 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 
success=no exit=-22 a0=12 a1=7ffe8c3b9f60 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba070 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba050 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba040 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba070 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8c3ba050 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8c3ba070 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8c3ba040 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe8c3ba0b0 a2=28 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.189000 audit: BPF prog-id=24 op=LOAD Sep 10 00:42:02.189000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8c3b9f20 a2=94 a3=0 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.189000 audit: BPF prog-id=24 op=UNLOAD Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe8c3b9f10 a2=50 a3=2800 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 
00:42:02.190000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe8c3b9f10 a2=50 a3=2800 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit: BPF prog-id=25 op=LOAD Sep 10 00:42:02.190000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8c3b9730 a2=94 a3=2 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.190000 audit: BPF prog-id=25 op=UNLOAD Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { perfmon } for pid=3687 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit[3687]: AVC avc: denied { bpf } for pid=3687 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.190000 audit: BPF prog-id=26 op=LOAD Sep 10 00:42:02.190000 audit[3687]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8c3b9830 a2=94 a3=30 items=0 ppid=3538 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 
00:42:02.196000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.196000 audit: BPF prog-id=27 op=LOAD Sep 10 00:42:02.196000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff761004c0 a2=98 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.197000 audit: BPF prog-id=27 op=UNLOAD Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit: BPF prog-id=28 op=LOAD Sep 10 00:42:02.198000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff761002b0 a2=94 a3=54428f items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.198000 audit: BPF prog-id=28 op=UNLOAD Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.198000 audit: BPF prog-id=29 op=LOAD Sep 10 00:42:02.198000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff761002e0 a2=94 a3=2 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.198000 audit: BPF prog-id=29 op=UNLOAD Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit: BPF prog-id=30 op=LOAD Sep 10 00:42:02.317000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff761001a0 a2=94 a3=1 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.317000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.317000 audit: BPF prog-id=30 op=UNLOAD Sep 10 00:42:02.317000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.317000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff76100270 a2=50 a3=7fff76100350 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.317000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 
a1=7fff761001b0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff761001e0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff761000f0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff76100200 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff761001e0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 
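The audit PROCTITLE fields in the records above are the audited command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for reading them (decode_proctitle is an illustrative helper, not anything present in this log; the sample string is copied verbatim from the bpftool entries above):

def decode_proctitle(hex_string: str) -> str:
    # PROCTITLE carries the raw argv of the audited process:
    # hex-encoded bytes with a NUL byte between successive arguments.
    raw = bytes.fromhex(hex_string)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# Sample taken from the audit records above.
sample = ("627066746F6F6C002D2D6A736F6E002D2D707265747479"
          "0070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F"
          "2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41")
print(decode_proctitle(sample))
# -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

The same decoding applied to the earlier PROCTITLE strings gives the corresponding load command (bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp). In these SYSCALL records, syscall 321 under arch=c000003e is bpf(2) on x86_64, and the denied capabilities 38 and 39 are CAP_PERFMON and CAP_BPF respectively.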
Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff761001d0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff76100200 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff761001e0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff76100200 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff761001d0 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.326000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.326000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff76100240 a2=28 a3=0 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.326000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff760ffff0 a2=50 a3=1 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit: BPF prog-id=31 op=LOAD Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff760ffff0 a2=94 a3=5 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit: BPF prog-id=31 op=UNLOAD Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff761000a0 a2=50 a3=1 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff761001c0 a2=4 a3=38 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 
audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { confidentiality } for pid=3690 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff76100210 a2=94 a3=6 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { confidentiality } for pid=3690 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff760ff9c0 a2=94 a3=88 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { perfmon } for pid=3690 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.327000 audit[3690]: AVC avc: denied { confidentiality } for pid=3690 comm="bpftool" lockdown_reason="use of bpf to 
read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 10 00:42:02.327000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff760ff9c0 a2=94 a3=88 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.327000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.328000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.328000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff761013f0 a2=10 a3=208 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.328000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.328000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff76101290 a2=10 a3=3 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.328000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.328000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff76101230 a2=10 a3=3 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.328000 audit[3690]: AVC avc: denied { bpf } for pid=3690 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 10 00:42:02.328000 audit[3690]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff76101230 a2=10 a3=7 items=0 ppid=3538 pid=3690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 
00:42:02.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 10 00:42:02.335000 audit: BPF prog-id=26 op=UNLOAD Sep 10 00:42:02.465000 audit[3741]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:02.465000 audit[3741]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffea3018160 a2=0 a3=7ffea301814c items=0 ppid=3538 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:02.573000 audit[3740]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:02.573000 audit[3740]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdd6683570 a2=0 a3=7ffdd668355c items=0 ppid=3538 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:02.574000 audit[3744]: NETFILTER_CFG table=filter:103 family=2 entries=39 op=nft_register_chain pid=3744 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:02.574000 audit[3744]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffe60009430 a2=0 a3=7ffe6000941c items=0 ppid=3538 pid=3744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.574000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:02.580000 audit[3739]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:02.580000 audit[3739]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffef05f17a0 a2=0 a3=7ffef05f178c items=0 ppid=3538 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:02.580000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:02.911937 kubelet[2167]: I0910 00:42:02.911711 2167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c3f841-9e91-4b9c-998c-ca1e02e1d983" path="/var/lib/kubelet/pods/f7c3f841-9e91-4b9c-998c-ca1e02e1d983/volumes" Sep 10 00:42:03.671313 systemd-networkd[1075]: cali1271c76b7c9: Link UP Sep 10 00:42:03.674856 kernel: 
IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 10 00:42:03.675014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1271c76b7c9: link becomes ready Sep 10 00:42:03.674943 systemd-networkd[1075]: cali1271c76b7c9: Gained carrier Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.347 [INFO][3751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0 whisker-5c4f8f5f79- calico-system 16c2cba6-675f-4bf9-b575-d80d9cc652c8 908 0 2025-09-10 00:42:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c4f8f5f79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c4f8f5f79-87rvz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1271c76b7c9 [] [] }} ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.348 [INFO][3751] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.378 [INFO][3766] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" HandleID="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Workload="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.379 [INFO][3766] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" HandleID="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Workload="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c4f8f5f79-87rvz", "timestamp":"2025-09-10 00:42:03.378804287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.379 [INFO][3766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.379 [INFO][3766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.379 [INFO][3766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.389 [INFO][3766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.398 [INFO][3766] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.404 [INFO][3766] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.406 [INFO][3766] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.409 [INFO][3766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.409 [INFO][3766] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.411 [INFO][3766] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62 Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.512 [INFO][3766] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.663 [INFO][3766] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.663 [INFO][3766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" host="localhost" Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.664 [INFO][3766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
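In the Calico IPAM trace above, the plugin claims 192.168.88.129/26 out of the host's affine block 192.168.88.128/26. A quick standard-library sanity check of that assignment (the block and address are copied from the entries above; this sketch is not part of the plugin itself):

import ipaddress

# Block and address as recorded in the IPAM log entries above.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.129")

# The claimed address lies inside the affine block and is its first
# address after the network address, consistent with "Auto-assigned 1 out of 1 IPv4s".
assert assigned in block
assert assigned == block.network_address + 1
print(assigned, "is in", block, "which holds", block.num_addresses, "addresses")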
Sep 10 00:42:03.741907 env[1311]: 2025-09-10 00:42:03.664 [INFO][3766] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" HandleID="k8s-pod-network.43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Workload="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.667 [INFO][3751] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0", GenerateName:"whisker-5c4f8f5f79-", Namespace:"calico-system", SelfLink:"", UID:"16c2cba6-675f-4bf9-b575-d80d9cc652c8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c4f8f5f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c4f8f5f79-87rvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1271c76b7c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.667 [INFO][3751] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.667 [INFO][3751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1271c76b7c9 ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.675 [INFO][3751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.676 [INFO][3751] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0", GenerateName:"whisker-5c4f8f5f79-", Namespace:"calico-system", SelfLink:"", UID:"16c2cba6-675f-4bf9-b575-d80d9cc652c8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 42, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c4f8f5f79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62", Pod:"whisker-5c4f8f5f79-87rvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1271c76b7c9", MAC:"76:9c:fa:09:4d:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:03.743195 env[1311]: 2025-09-10 00:42:03.737 [INFO][3751] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62" Namespace="calico-system" Pod="whisker-5c4f8f5f79-87rvz" WorkloadEndpoint="localhost-k8s-whisker--5c4f8f5f79--87rvz-eth0" Sep 10 00:42:03.882649 systemd-networkd[1075]: vxlan.calico: Gained IPv6LL Sep 10 00:42:03.872000 audit[3787]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=3787 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:03.872000 audit[3787]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7fff300c9e10 a2=0 a3=7fff300c9dfc items=0 ppid=3538 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:03.872000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:03.998960 env[1311]: time="2025-09-10T00:42:03.998851366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:03.998960 env[1311]: time="2025-09-10T00:42:03.998909855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:03.999250 env[1311]: time="2025-09-10T00:42:03.998923621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:03.999497 env[1311]: time="2025-09-10T00:42:03.999419832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62 pid=3800 runtime=io.containerd.runc.v2 Sep 10 00:42:04.034796 systemd[1]: run-containerd-runc-k8s.io-43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62-runc.fT5Bhc.mount: Deactivated successfully. Sep 10 00:42:04.062182 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:04.100376 env[1311]: time="2025-09-10T00:42:04.100217632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4f8f5f79-87rvz,Uid:16c2cba6-675f-4bf9-b575-d80d9cc652c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62\"" Sep 10 00:42:04.102886 env[1311]: time="2025-09-10T00:42:04.102833098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:42:04.973209 systemd-networkd[1075]: cali1271c76b7c9: Gained IPv6LL Sep 10 00:42:05.901588 env[1311]: time="2025-09-10T00:42:05.901363247Z" level=info msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" Sep 10 00:42:05.902151 env[1311]: time="2025-09-10T00:42:05.901837073Z" level=info msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" Sep 10 00:42:05.902250 env[1311]: time="2025-09-10T00:42:05.902216017Z" level=info msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" Sep 10 00:42:05.902676 env[1311]: time="2025-09-10T00:42:05.902592955Z" level=info msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" Sep 10 00:42:06.020323 env[1311]: time="2025-09-10T00:42:06.020264220Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:06.029344 env[1311]: time="2025-09-10T00:42:06.029275602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:06.034579 env[1311]: time="2025-09-10T00:42:06.034134902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:06.041226 env[1311]: time="2025-09-10T00:42:06.041170176Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:06.042165 env[1311]: time="2025-09-10T00:42:06.042078561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 00:42:06.046730 env[1311]: time="2025-09-10T00:42:06.046626710Z" level=info msg="CreateContainer within sandbox \"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:42:06.090433 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3202513310.mount: Deactivated successfully. Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.011 [INFO][3869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.011 [INFO][3869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" iface="eth0" netns="/var/run/netns/cni-459f306d-1094-d3b2-1c56-b8972274b3d9" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.012 [INFO][3869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" iface="eth0" netns="/var/run/netns/cni-459f306d-1094-d3b2-1c56-b8972274b3d9" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.012 [INFO][3869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" iface="eth0" netns="/var/run/netns/cni-459f306d-1094-d3b2-1c56-b8972274b3d9" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.012 [INFO][3869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.012 [INFO][3869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.069 [INFO][3907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.071 [INFO][3907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.071 [INFO][3907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.083 [WARNING][3907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.083 [INFO][3907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.086 [INFO][3907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:06.098842 env[1311]: 2025-09-10 00:42:06.096 [INFO][3869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:06.102777 systemd[1]: run-netns-cni\x2d459f306d\x2d1094\x2dd3b2\x2d1c56\x2db8972274b3d9.mount: Deactivated successfully. 
Sep 10 00:42:06.104289 env[1311]: time="2025-09-10T00:42:06.104214364Z" level=info msg="TearDown network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" successfully" Sep 10 00:42:06.104289 env[1311]: time="2025-09-10T00:42:06.104287896Z" level=info msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" returns successfully" Sep 10 00:42:06.105010 kubelet[2167]: E0910 00:42:06.104868 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:06.107170 env[1311]: time="2025-09-10T00:42:06.107125289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-td5ft,Uid:cf3b90c9-768d-48d6-a148-e6a622704a6d,Namespace:kube-system,Attempt:1,}" Sep 10 00:42:06.110532 env[1311]: time="2025-09-10T00:42:06.110458059Z" level=info msg="CreateContainer within sandbox \"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6d30561b58182f433d9d0be917ac63f7424dbfbef1df07e145ee66f8c8405bfd\"" Sep 10 00:42:06.111158 env[1311]: time="2025-09-10T00:42:06.111131550Z" level=info msg="StartContainer for \"6d30561b58182f433d9d0be917ac63f7424dbfbef1df07e145ee66f8c8405bfd\"" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.038 [INFO][3870] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.038 [INFO][3870] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" iface="eth0" netns="/var/run/netns/cni-4a6ee903-70af-6db8-b600-a2577bd0eed1" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.038 [INFO][3870] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" iface="eth0" netns="/var/run/netns/cni-4a6ee903-70af-6db8-b600-a2577bd0eed1" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.039 [INFO][3870] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" iface="eth0" netns="/var/run/netns/cni-4a6ee903-70af-6db8-b600-a2577bd0eed1" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.039 [INFO][3870] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.039 [INFO][3870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.123 [INFO][3915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.123 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.123 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.148 [WARNING][3915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.148 [INFO][3915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.154 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:06.161022 env[1311]: 2025-09-10 00:42:06.157 [INFO][3870] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:06.161022 env[1311]: time="2025-09-10T00:42:06.160129188Z" level=info msg="TearDown network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" successfully" Sep 10 00:42:06.161022 env[1311]: time="2025-09-10T00:42:06.160187630Z" level=info msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" returns successfully" Sep 10 00:42:06.161833 env[1311]: time="2025-09-10T00:42:06.161664905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9rdtv,Uid:7c0bb760-61ca-4fc9-a88d-45f47a6eb434,Namespace:calico-system,Attempt:1,}" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.067 [INFO][3891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.070 [INFO][3891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" iface="eth0" netns="/var/run/netns/cni-ee9751e1-d652-f145-c56c-2356ae89ea66" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.070 [INFO][3891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" iface="eth0" netns="/var/run/netns/cni-ee9751e1-d652-f145-c56c-2356ae89ea66" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.071 [INFO][3891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" iface="eth0" netns="/var/run/netns/cni-ee9751e1-d652-f145-c56c-2356ae89ea66" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.071 [INFO][3891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.071 [INFO][3891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.132 [INFO][3930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.132 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.154 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.171 [WARNING][3930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.171 [INFO][3930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.179 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:06.186651 env[1311]: 2025-09-10 00:42:06.184 [INFO][3891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:06.187773 env[1311]: time="2025-09-10T00:42:06.187692827Z" level=info msg="TearDown network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" successfully" Sep 10 00:42:06.187911 env[1311]: time="2025-09-10T00:42:06.187874519Z" level=info msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" returns successfully" Sep 10 00:42:06.188972 env[1311]: time="2025-09-10T00:42:06.188938786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-qnnws,Uid:ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.068 [INFO][3897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.070 [INFO][3897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" iface="eth0" netns="/var/run/netns/cni-1103c3df-e98d-26cd-9ac2-96bb168872bc" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.070 [INFO][3897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" iface="eth0" netns="/var/run/netns/cni-1103c3df-e98d-26cd-9ac2-96bb168872bc" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.071 [INFO][3897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" iface="eth0" netns="/var/run/netns/cni-1103c3df-e98d-26cd-9ac2-96bb168872bc" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.071 [INFO][3897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.071 [INFO][3897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.154 [INFO][3924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.154 [INFO][3924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.179 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.189 [WARNING][3924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.189 [INFO][3924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.197 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:06.206059 env[1311]: 2025-09-10 00:42:06.202 [INFO][3897] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:06.208186 env[1311]: time="2025-09-10T00:42:06.208100301Z" level=info msg="TearDown network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" successfully" Sep 10 00:42:06.208442 env[1311]: time="2025-09-10T00:42:06.208408095Z" level=info msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" returns successfully" Sep 10 00:42:06.211700 env[1311]: time="2025-09-10T00:42:06.209879970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k7vf2,Uid:6e46e9e0-10bc-4c50-9705-59d1dee4c692,Namespace:calico-system,Attempt:1,}" Sep 10 00:42:06.248321 env[1311]: time="2025-09-10T00:42:06.248194203Z" level=info msg="StartContainer for \"6d30561b58182f433d9d0be917ac63f7424dbfbef1df07e145ee66f8c8405bfd\" returns successfully" Sep 10 00:42:06.254676 env[1311]: time="2025-09-10T00:42:06.254614419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:42:06.395583 systemd-networkd[1075]: cali6e49f376bd9: Link UP Sep 10 00:42:06.402749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 10 00:42:06.402887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6e49f376bd9: link becomes ready Sep 10 00:42:06.403089 systemd-networkd[1075]: cali6e49f376bd9: Gained carrier Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.224 [INFO][3955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0 coredns-7c65d6cfc9- kube-system cf3b90c9-768d-48d6-a148-e6a622704a6d 929 0 2025-09-10 00:41:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-td5ft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e49f376bd9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.224 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.286 [INFO][3997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" HandleID="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.286 [INFO][3997] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" HandleID="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-td5ft", "timestamp":"2025-09-10 00:42:06.286559536 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.286 [INFO][3997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.287 [INFO][3997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.287 [INFO][3997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.313 [INFO][3997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.336 [INFO][3997] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.348 [INFO][3997] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.350 [INFO][3997] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.356 [INFO][3997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.356 [INFO][3997] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.360 [INFO][3997] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.368 [INFO][3997] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.385 [INFO][3997] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.385 [INFO][3997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" host="localhost" Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.386 [INFO][3997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:06.430673 env[1311]: 2025-09-10 00:42:06.387 [INFO][3997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" HandleID="k8s-pod-network.9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.389 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf3b90c9-768d-48d6-a148-e6a622704a6d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-td5ft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e49f376bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.391 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.391 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e49f376bd9 ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.404 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.404 
[INFO][3955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf3b90c9-768d-48d6-a148-e6a622704a6d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea", Pod:"coredns-7c65d6cfc9-td5ft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e49f376bd9", MAC:"e6:76:c5:29:40:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.432012 env[1311]: 2025-09-10 00:42:06.424 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-td5ft" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:06.453000 audit[4074]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:06.453000 audit[4074]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffc89e6de10 a2=0 a3=7ffc89e6ddfc items=0 ppid=3538 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:06.453000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:06.465853 env[1311]: time="2025-09-10T00:42:06.464966588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:06.465853 env[1311]: time="2025-09-10T00:42:06.465037184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:06.465853 env[1311]: time="2025-09-10T00:42:06.465051342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:06.465853 env[1311]: time="2025-09-10T00:42:06.465306023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea pid=4082 runtime=io.containerd.runc.v2 Sep 10 00:42:06.520263 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:06.521401 systemd-networkd[1075]: cali06a6c55b673: Link UP Sep 10 00:42:06.531752 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali06a6c55b673: link becomes ready Sep 10 00:42:06.532596 systemd-networkd[1075]: cali06a6c55b673: Gained carrier Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.278 [INFO][3976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--9rdtv-eth0 goldmane-7988f88666- calico-system 7c0bb760-61ca-4fc9-a88d-45f47a6eb434 930 0 2025-09-10 00:41:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-9rdtv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali06a6c55b673 [] [] }} ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.278 [INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.346 [INFO][4028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" HandleID="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.347 [INFO][4028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" HandleID="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-9rdtv", "timestamp":"2025-09-10 00:42:06.346580332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.347 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.385 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.385 [INFO][4028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.415 [INFO][4028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.446 [INFO][4028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.458 [INFO][4028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.464 [INFO][4028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.469 [INFO][4028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.469 [INFO][4028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.473 [INFO][4028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568 Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.489 [INFO][4028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.505 [INFO][4028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.506 [INFO][4028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" host="localhost" Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.506 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:06.559711 env[1311]: 2025-09-10 00:42:06.506 [INFO][4028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" HandleID="k8s-pod-network.ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.514 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9rdtv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7c0bb760-61ca-4fc9-a88d-45f47a6eb434", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-9rdtv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali06a6c55b673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.515 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.515 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06a6c55b673 ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.531 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.533 [INFO][3976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9rdtv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7c0bb760-61ca-4fc9-a88d-45f47a6eb434", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568", Pod:"goldmane-7988f88666-9rdtv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali06a6c55b673", MAC:"9a:c2:b8:15:44:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.560641 env[1311]: 2025-09-10 00:42:06.555 [INFO][3976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568" Namespace="calico-system" Pod="goldmane-7988f88666-9rdtv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:06.561788 env[1311]: time="2025-09-10T00:42:06.561745431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-td5ft,Uid:cf3b90c9-768d-48d6-a148-e6a622704a6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea\"" Sep 10 00:42:06.564888 kubelet[2167]: E0910 00:42:06.563224 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:06.577976 env[1311]: time="2025-09-10T00:42:06.577923742Z" level=info msg="CreateContainer within sandbox \"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:42:06.578000 audit[4131]: NETFILTER_CFG table=filter:107 family=2 entries=48 op=nft_register_chain pid=4131 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:06.578000 audit[4131]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffca7b56460 a2=0 a3=7ffca7b5644c items=0 ppid=3538 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:06.578000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:06.600586 env[1311]: time="2025-09-10T00:42:06.600469436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:06.600903 env[1311]: time="2025-09-10T00:42:06.600809113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:06.601634 env[1311]: time="2025-09-10T00:42:06.600880631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:06.601634 env[1311]: time="2025-09-10T00:42:06.601295884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568 pid=4141 runtime=io.containerd.runc.v2 Sep 10 00:42:06.622260 env[1311]: time="2025-09-10T00:42:06.622182221Z" level=info msg="CreateContainer within sandbox \"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ca8ff219e66cd1b3cd59ca5c5aa02e024d2fffd6bdb394a345443c7887754b6\"" Sep 10 00:42:06.624416 env[1311]: time="2025-09-10T00:42:06.623919589Z" level=info msg="StartContainer for \"6ca8ff219e66cd1b3cd59ca5c5aa02e024d2fffd6bdb394a345443c7887754b6\"" Sep 10 00:42:06.636822 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali78dfd0a03c5: link becomes ready Sep 10 00:42:06.637380 systemd-networkd[1075]: cali78dfd0a03c5: Link UP Sep 10 00:42:06.642497 systemd-networkd[1075]: cali78dfd0a03c5: Gained carrier Sep 10 00:42:06.666569 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.391 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k7vf2-eth0 csi-node-driver- calico-system 6e46e9e0-10bc-4c50-9705-59d1dee4c692 933 0 2025-09-10 00:41:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k7vf2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali78dfd0a03c5 [] [] }} ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.391 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.461 [INFO][4060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" HandleID="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.462 [INFO][4060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" 
HandleID="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a53a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k7vf2", "timestamp":"2025-09-10 00:42:06.46178006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.462 [INFO][4060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.506 [INFO][4060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.506 [INFO][4060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.522 [INFO][4060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.548 [INFO][4060] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.574 [INFO][4060] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.581 [INFO][4060] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.585 [INFO][4060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.586 [INFO][4060] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.591 [INFO][4060] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.609 [INFO][4060] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.619 [INFO][4060] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.620 [INFO][4060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" host="localhost" Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.620 [INFO][4060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:06.687643 env[1311]: 2025-09-10 00:42:06.620 [INFO][4060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" HandleID="k8s-pod-network.a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.628 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k7vf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e46e9e0-10bc-4c50-9705-59d1dee4c692", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k7vf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78dfd0a03c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.629 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.629 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78dfd0a03c5 ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.638 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.639 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k7vf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e46e9e0-10bc-4c50-9705-59d1dee4c692", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a", Pod:"csi-node-driver-k7vf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78dfd0a03c5", MAC:"9a:a6:97:96:6c:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.688984 env[1311]: 2025-09-10 00:42:06.662 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a" Namespace="calico-system" Pod="csi-node-driver-k7vf2" WorkloadEndpoint="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:06.698000 audit[4186]: NETFILTER_CFG table=filter:108 family=2 entries=44 op=nft_register_chain pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:06.698000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffcbb508d00 a2=0 a3=7ffcbb508cec items=0 ppid=3538 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:06.698000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:06.734389 env[1311]: time="2025-09-10T00:42:06.732152414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9rdtv,Uid:7c0bb760-61ca-4fc9-a88d-45f47a6eb434,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568\"" Sep 10 00:42:06.740026 env[1311]: time="2025-09-10T00:42:06.739758912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:06.740026 env[1311]: time="2025-09-10T00:42:06.739880036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:06.740026 env[1311]: time="2025-09-10T00:42:06.739897870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:06.740559 env[1311]: time="2025-09-10T00:42:06.740211306Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a pid=4213 runtime=io.containerd.runc.v2 Sep 10 00:42:06.763059 systemd-networkd[1075]: cali59c7f603008: Link UP Sep 10 00:42:06.768372 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali59c7f603008: link becomes ready Sep 10 00:42:06.769096 systemd-networkd[1075]: cali59c7f603008: Gained carrier Sep 10 00:42:06.798467 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:06.800071 env[1311]: time="2025-09-10T00:42:06.799996798Z" level=info msg="StartContainer for \"6ca8ff219e66cd1b3cd59ca5c5aa02e024d2fffd6bdb394a345443c7887754b6\" returns successfully" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.365 [INFO][4014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0 calico-apiserver-c88bffbdf- calico-apiserver ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe 932 0 2025-09-10 00:41:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c88bffbdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c88bffbdf-qnnws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59c7f603008 [] [] }} ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.365 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.470 [INFO][4050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" HandleID="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.470 [INFO][4050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" HandleID="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c88bffbdf-qnnws", "timestamp":"2025-09-10 00:42:06.47010175 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.471 [INFO][4050] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.621 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.621 [INFO][4050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.637 [INFO][4050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.670 [INFO][4050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.690 [INFO][4050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.695 [INFO][4050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.700 [INFO][4050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.700 [INFO][4050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.712 [INFO][4050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4 Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.726 [INFO][4050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.750 [INFO][4050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.750 [INFO][4050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" host="localhost" Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.750 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:06.817189 env[1311]: 2025-09-10 00:42:06.750 [INFO][4050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" HandleID="k8s-pod-network.36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.758 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c88bffbdf-qnnws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c7f603008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.758 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.758 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59c7f603008 ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.771 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.777 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" 
Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4", Pod:"calico-apiserver-c88bffbdf-qnnws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c7f603008", MAC:"4e:55:7b:8c:cb:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:06.818036 env[1311]: 2025-09-10 00:42:06.810 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-qnnws" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:06.830774 env[1311]: time="2025-09-10T00:42:06.830688243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k7vf2,Uid:6e46e9e0-10bc-4c50-9705-59d1dee4c692,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a\"" Sep 10 00:42:06.835000 audit[4264]: NETFILTER_CFG table=filter:109 family=2 entries=62 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:06.838365 kernel: kauditd_printk_skb: 559 callbacks suppressed Sep 10 00:42:06.838462 kernel: audit: type=1325 audit(1757464926.835:396): table=filter:109 family=2 entries=62 op=nft_register_chain pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:06.835000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffe124af0b0 a2=0 a3=7ffe124af09c items=0 ppid=3538 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:06.847694 env[1311]: time="2025-09-10T00:42:06.844212004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:06.847694 env[1311]: time="2025-09-10T00:42:06.844273202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:06.847694 env[1311]: time="2025-09-10T00:42:06.844288231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:06.847694 env[1311]: time="2025-09-10T00:42:06.844676432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4 pid=4275 runtime=io.containerd.runc.v2 Sep 10 00:42:06.852419 kernel: audit: type=1300 audit(1757464926.835:396): arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffe124af0b0 a2=0 a3=7ffe124af09c items=0 ppid=3538 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:06.835000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:06.859944 kernel: audit: type=1327 audit(1757464926.835:396): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:06.894316 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:06.903107 env[1311]: time="2025-09-10T00:42:06.903011781Z" level=info msg="StopPodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" Sep 10 00:42:06.938205 env[1311]: time="2025-09-10T00:42:06.938011240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-qnnws,Uid:ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4\"" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:06.999 [INFO][4316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:06.999 [INFO][4316] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" iface="eth0" netns="/var/run/netns/cni-59a96247-32c7-020e-a6fa-2b8c59a32657" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.000 [INFO][4316] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" iface="eth0" netns="/var/run/netns/cni-59a96247-32c7-020e-a6fa-2b8c59a32657" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.000 [INFO][4316] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" iface="eth0" netns="/var/run/netns/cni-59a96247-32c7-020e-a6fa-2b8c59a32657" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.000 [INFO][4316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.000 [INFO][4316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.053 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.054 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.054 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.067 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.068 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.074 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:07.081120 env[1311]: 2025-09-10 00:42:07.077 [INFO][4316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:07.084657 env[1311]: time="2025-09-10T00:42:07.084597664Z" level=info msg="TearDown network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" successfully" Sep 10 00:42:07.084862 env[1311]: time="2025-09-10T00:42:07.084816155Z" level=info msg="StopPodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" returns successfully" Sep 10 00:42:07.087578 env[1311]: time="2025-09-10T00:42:07.087531377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc78c4547-tl96q,Uid:8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4,Namespace:calico-system,Attempt:1,}" Sep 10 00:42:07.093768 systemd[1]: run-netns-cni\x2d59a96247\x2d32c7\x2d020e\x2da6fa\x2d2b8c59a32657.mount: Deactivated successfully. Sep 10 00:42:07.093990 systemd[1]: run-netns-cni\x2d4a6ee903\x2d70af\x2d6db8\x2db600\x2da2577bd0eed1.mount: Deactivated successfully. Sep 10 00:42:07.094137 systemd[1]: run-netns-cni\x2dee9751e1\x2dd652\x2df145\x2dc56c\x2d2356ae89ea66.mount: Deactivated successfully. Sep 10 00:42:07.094254 systemd[1]: run-netns-cni\x2d1103c3df\x2de98d\x2d26cd\x2d9ac2\x2d96bb168872bc.mount: Deactivated successfully. 
Sep 10 00:42:07.243677 kubelet[2167]: E0910 00:42:07.242624 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:07.300000 audit[4368]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:07.300000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc639dacd0 a2=0 a3=7ffc639dacbc items=0 ppid=2276 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:07.312540 kernel: audit: type=1325 audit(1757464927.300:397): table=filter:110 family=2 entries=20 op=nft_register_rule pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:07.312747 kernel: audit: type=1300 audit(1757464927.300:397): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc639dacd0 a2=0 a3=7ffc639dacbc items=0 ppid=2276 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:07.312795 kernel: audit: type=1327 audit(1757464927.300:397): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:07.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:07.321000 audit[4368]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:07.321000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc639dacd0 a2=0 a3=0 items=0 ppid=2276 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:07.334091 kernel: audit: type=1325 audit(1757464927.321:398): table=nat:111 family=2 entries=14 op=nft_register_rule pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:07.334307 kernel: audit: type=1300 audit(1757464927.321:398): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc639dacd0 a2=0 a3=0 items=0 ppid=2276 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:07.334375 kernel: audit: type=1327 audit(1757464927.321:398): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:07.321000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:07.343296 systemd-networkd[1075]: caliad7d140b259: Link UP Sep 10 00:42:07.345250 systemd-networkd[1075]: caliad7d140b259: Gained carrier Sep 10 00:42:07.345511 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliad7d140b259: link becomes ready Sep 10 00:42:07.363690 kubelet[2167]: I0910 00:42:07.363029 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7c65d6cfc9-td5ft" podStartSLOduration=43.36300462 podStartE2EDuration="43.36300462s" podCreationTimestamp="2025-09-10 00:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:42:07.271881881 +0000 UTC m=+50.460789993" watchObservedRunningTime="2025-09-10 00:42:07.36300462 +0000 UTC m=+50.551912712" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.184 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0 calico-kube-controllers-6dc78c4547- calico-system 8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4 960 0 2025-09-10 00:41:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc78c4547 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dc78c4547-tl96q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad7d140b259 [] [] }} ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.185 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.223 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" HandleID="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.223 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" HandleID="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002854a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dc78c4547-tl96q", "timestamp":"2025-09-10 00:42:07.22345838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.223 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.223 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.224 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.232 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.246 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.258 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.262 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.272 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.272 [INFO][4360] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.288 [INFO][4360] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65 Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.295 [INFO][4360] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.337 [INFO][4360] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.338 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" host="localhost" Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.338 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:07.364956 env[1311]: 2025-09-10 00:42:07.338 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" HandleID="k8s-pod-network.4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.341 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0", GenerateName:"calico-kube-controllers-6dc78c4547-", Namespace:"calico-system", SelfLink:"", UID:"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc78c4547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dc78c4547-tl96q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad7d140b259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.341 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.341 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad7d140b259 ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.345 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.348 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0", GenerateName:"calico-kube-controllers-6dc78c4547-", Namespace:"calico-system", SelfLink:"", UID:"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc78c4547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65", Pod:"calico-kube-controllers-6dc78c4547-tl96q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad7d140b259", MAC:"72:15:49:16:a2:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:07.365657 env[1311]: 2025-09-10 00:42:07.362 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65" Namespace="calico-system" Pod="calico-kube-controllers-6dc78c4547-tl96q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:07.383000 audit[4381]: NETFILTER_CFG table=filter:112 family=2 entries=52 op=nft_register_chain pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:07.387357 env[1311]: time="2025-09-10T00:42:07.387241124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:07.387516 env[1311]: time="2025-09-10T00:42:07.387379881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:07.387516 env[1311]: time="2025-09-10T00:42:07.387425189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:07.387680 env[1311]: time="2025-09-10T00:42:07.387626688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65 pid=4385 runtime=io.containerd.runc.v2 Sep 10 00:42:07.388449 kernel: audit: type=1325 audit(1757464927.383:399): table=filter:112 family=2 entries=52 op=nft_register_chain pid=4381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:07.383000 audit[4381]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffdafaf3da0 a2=0 a3=7ffdafaf3d8c items=0 ppid=3538 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:07.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:07.431563 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:07.508879 env[1311]: time="2025-09-10T00:42:07.508701686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc78c4547-tl96q,Uid:8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65\"" Sep 10 00:42:07.658565 systemd-networkd[1075]: cali06a6c55b673: Gained IPv6LL Sep 10 00:42:07.901935 env[1311]: time="2025-09-10T00:42:07.901728389Z" level=info msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" iface="eth0" netns="/var/run/netns/cni-ec23366d-89de-94ae-e866-70bfe3ab0dfe" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" iface="eth0" netns="/var/run/netns/cni-ec23366d-89de-94ae-e866-70bfe3ab0dfe" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" iface="eth0" netns="/var/run/netns/cni-ec23366d-89de-94ae-e866-70bfe3ab0dfe" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.021 [INFO][4438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.053 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.054 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.054 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.061 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.061 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.063 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:08.068890 env[1311]: 2025-09-10 00:42:08.066 [INFO][4438] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:08.071204 env[1311]: time="2025-09-10T00:42:08.069113033Z" level=info msg="TearDown network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" successfully" Sep 10 00:42:08.071204 env[1311]: time="2025-09-10T00:42:08.069155364Z" level=info msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" returns successfully" Sep 10 00:42:08.071204 env[1311]: time="2025-09-10T00:42:08.070133932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxvf6,Uid:ffbcf960-312a-4e1c-84c9-bb7a1a2c101f,Namespace:kube-system,Attempt:1,}" Sep 10 00:42:08.071348 kubelet[2167]: E0910 00:42:08.069633 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:08.083054 systemd[1]: run-netns-cni\x2dec23366d\x2d89de\x2d94ae\x2de866\x2d70bfe3ab0dfe.mount: Deactivated successfully. 
Sep 10 00:42:08.234699 systemd-networkd[1075]: cali6e49f376bd9: Gained IPv6LL Sep 10 00:42:08.235153 systemd-networkd[1075]: cali78dfd0a03c5: Gained IPv6LL Sep 10 00:42:08.254784 kubelet[2167]: E0910 00:42:08.254592 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:08.273762 systemd-networkd[1075]: calie1d3e8ef100: Link UP Sep 10 00:42:08.278759 systemd-networkd[1075]: calie1d3e8ef100: Gained carrier Sep 10 00:42:08.288787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie1d3e8ef100: link becomes ready Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.151 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0 coredns-7c65d6cfc9- kube-system ffbcf960-312a-4e1c-84c9-bb7a1a2c101f 972 0 2025-09-10 00:41:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dxvf6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1d3e8ef100 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.152 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.191 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" HandleID="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.192 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" HandleID="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d6610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dxvf6", "timestamp":"2025-09-10 00:42:08.190505297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.192 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.192 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.193 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.203 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.217 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.227 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.232 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.238 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.238 [INFO][4470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.243 [INFO][4470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4 Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.251 [INFO][4470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.265 [INFO][4470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.265 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" host="localhost" Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.265 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:08.296576 env[1311]: 2025-09-10 00:42:08.266 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" HandleID="k8s-pod-network.addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.272 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dxvf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1d3e8ef100", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.272 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.272 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1d3e8ef100 ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.274 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.274 
[INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4", Pod:"coredns-7c65d6cfc9-dxvf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1d3e8ef100", MAC:"e6:40:9d:e6:1c:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:08.297514 env[1311]: 2025-09-10 00:42:08.292 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxvf6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:08.308000 audit[4488]: NETFILTER_CFG table=filter:113 family=2 entries=17 op=nft_register_rule pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:08.308000 audit[4488]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc82d34320 a2=0 a3=7ffc82d3430c items=0 ppid=2276 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:08.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:08.314000 audit[4488]: NETFILTER_CFG table=nat:114 family=2 entries=35 op=nft_register_chain pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:08.314000 audit[4488]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc82d34320 a2=0 a3=7ffc82d3430c items=0 ppid=2276 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:08.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:08.330000 audit[4489]: NETFILTER_CFG table=filter:115 family=2 entries=52 op=nft_register_chain pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:08.330000 audit[4489]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7ffe5087c8f0 a2=0 a3=7ffe5087c8dc items=0 ppid=3538 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:08.330000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:08.335021 env[1311]: time="2025-09-10T00:42:08.334915216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:08.335147 env[1311]: time="2025-09-10T00:42:08.335082809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:08.335147 env[1311]: time="2025-09-10T00:42:08.335119921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:08.335416 env[1311]: time="2025-09-10T00:42:08.335369553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4 pid=4498 runtime=io.containerd.runc.v2 Sep 10 00:42:08.375544 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:08.425318 env[1311]: time="2025-09-10T00:42:08.425228714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxvf6,Uid:ffbcf960-312a-4e1c-84c9-bb7a1a2c101f,Namespace:kube-system,Attempt:1,} returns sandbox id \"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4\"" Sep 10 00:42:08.426471 kubelet[2167]: E0910 00:42:08.426430 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:08.431594 env[1311]: time="2025-09-10T00:42:08.431542685Z" level=info msg="CreateContainer within sandbox \"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:42:08.454916 env[1311]: time="2025-09-10T00:42:08.454174105Z" level=info msg="CreateContainer within sandbox \"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"769f85764e5f5d8d7d340dd684f27b873ee1bd1f3150888f0722a54084213263\"" Sep 10 00:42:08.456964 env[1311]: time="2025-09-10T00:42:08.456879695Z" level=info msg="StartContainer for \"769f85764e5f5d8d7d340dd684f27b873ee1bd1f3150888f0722a54084213263\"" Sep 10 00:42:08.609314 env[1311]: time="2025-09-10T00:42:08.608242276Z" level=info msg="StartContainer for 
\"769f85764e5f5d8d7d340dd684f27b873ee1bd1f3150888f0722a54084213263\" returns successfully" Sep 10 00:42:08.810587 systemd-networkd[1075]: cali59c7f603008: Gained IPv6LL Sep 10 00:42:08.903219 env[1311]: time="2025-09-10T00:42:08.902891880Z" level=info msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.968 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.969 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" iface="eth0" netns="/var/run/netns/cni-9d0c7a40-e299-ed6b-9419-f85acac7a9b6" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.969 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" iface="eth0" netns="/var/run/netns/cni-9d0c7a40-e299-ed6b-9419-f85acac7a9b6" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.969 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" iface="eth0" netns="/var/run/netns/cni-9d0c7a40-e299-ed6b-9419-f85acac7a9b6" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.969 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:08.969 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.006 [INFO][4591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.006 [INFO][4591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.006 [INFO][4591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.018 [WARNING][4591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.018 [INFO][4591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.020 [INFO][4591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:09.025006 env[1311]: 2025-09-10 00:42:09.022 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:09.026089 env[1311]: time="2025-09-10T00:42:09.025296102Z" level=info msg="TearDown network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" successfully" Sep 10 00:42:09.026089 env[1311]: time="2025-09-10T00:42:09.025403378Z" level=info msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" returns successfully" Sep 10 00:42:09.026728 env[1311]: time="2025-09-10T00:42:09.026649290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-mdkqg,Uid:7d9b8a9b-0a8b-44fd-b257-93a929c46e2c,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:42:09.088624 systemd[1]: run-containerd-runc-k8s.io-addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4-runc.rKoivq.mount: Deactivated successfully. Sep 10 00:42:09.088858 systemd[1]: run-netns-cni\x2d9d0c7a40\x2de299\x2ded6b\x2d9419\x2df85acac7a9b6.mount: Deactivated successfully. Sep 10 00:42:09.259373 kubelet[2167]: E0910 00:42:09.259288 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:09.259971 kubelet[2167]: E0910 00:42:09.259288 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:09.346138 kubelet[2167]: I0910 00:42:09.346040 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dxvf6" podStartSLOduration=45.346014809 podStartE2EDuration="45.346014809s" podCreationTimestamp="2025-09-10 00:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:42:09.345854361 +0000 UTC m=+52.534762483" watchObservedRunningTime="2025-09-10 00:42:09.346014809 +0000 UTC m=+52.534922911" Sep 10 00:42:09.401483 systemd-networkd[1075]: caliad7d140b259: Gained IPv6LL Sep 10 00:42:09.437000 audit[4620]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:09.437000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc79802140 a2=0 a3=7ffc7980212c items=0 ppid=2276 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:09.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:09.442000 audit[4620]: NETFILTER_CFG table=nat:117 family=2 entries=44 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:09.442000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc79802140 a2=0 a3=7ffc7980212c items=0 ppid=2276 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:09.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:09.545761 
systemd-networkd[1075]: cali876815bc795: Link UP Sep 10 00:42:09.554296 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 10 00:42:09.554619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali876815bc795: link becomes ready Sep 10 00:42:09.554894 systemd-networkd[1075]: cali876815bc795: Gained carrier Sep 10 00:42:09.576981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619763975.mount: Deactivated successfully. Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.340 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0 calico-apiserver-c88bffbdf- calico-apiserver 7d9b8a9b-0a8b-44fd-b257-93a929c46e2c 990 0 2025-09-10 00:41:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c88bffbdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c88bffbdf-mdkqg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali876815bc795 [] [] }} ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.341 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.461 [INFO][4615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" HandleID="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.461 [INFO][4615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" HandleID="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c88bffbdf-mdkqg", "timestamp":"2025-09-10 00:42:09.461269523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.461 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.461 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.461 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.473 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.490 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.500 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.503 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.509 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.509 [INFO][4615] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.513 [INFO][4615] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24 Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.523 [INFO][4615] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.536 [INFO][4615] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.537 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" host="localhost" Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.538 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:09.583448 env[1311]: 2025-09-10 00:42:09.538 [INFO][4615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" HandleID="k8s-pod-network.49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.541 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c88bffbdf-mdkqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876815bc795", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.541 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.541 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali876815bc795 ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.555 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.555 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" 
Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24", Pod:"calico-apiserver-c88bffbdf-mdkqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876815bc795", MAC:"d6:f5:31:97:94:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:09.584793 env[1311]: 2025-09-10 00:42:09.580 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24" Namespace="calico-apiserver" Pod="calico-apiserver-c88bffbdf-mdkqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:09.617215 env[1311]: time="2025-09-10T00:42:09.617097637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:42:09.617770 env[1311]: time="2025-09-10T00:42:09.617732481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:42:09.618051 env[1311]: time="2025-09-10T00:42:09.618002792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:42:09.619565 env[1311]: time="2025-09-10T00:42:09.619517352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24 pid=4642 runtime=io.containerd.runc.v2 Sep 10 00:42:09.637606 env[1311]: time="2025-09-10T00:42:09.637536206Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:09.651582 env[1311]: time="2025-09-10T00:42:09.647898050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:09.655000 audit[4665]: NETFILTER_CFG table=filter:118 family=2 entries=61 op=nft_register_chain pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 10 00:42:09.655000 audit[4665]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7ffd3fbe4910 a2=0 a3=7ffd3fbe48fc items=0 ppid=3538 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:09.655000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 10 00:42:09.666630 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:42:09.701528 env[1311]: time="2025-09-10T00:42:09.701449169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:09.716740 env[1311]: time="2025-09-10T00:42:09.716613170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c88bffbdf-mdkqg,Uid:7d9b8a9b-0a8b-44fd-b257-93a929c46e2c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24\"" Sep 10 00:42:09.790632 env[1311]: time="2025-09-10T00:42:09.790565084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 00:42:09.790899 env[1311]: time="2025-09-10T00:42:09.790833531Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:09.793805 env[1311]: time="2025-09-10T00:42:09.793756195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:42:09.798991 env[1311]: time="2025-09-10T00:42:09.798905441Z" level=info msg="CreateContainer within sandbox \"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:42:09.841175 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:36650.service. 
Sep 10 00:42:09.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.41:22-10.0.0.1:36650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:09.848681 env[1311]: time="2025-09-10T00:42:09.848593804Z" level=info msg="CreateContainer within sandbox \"43125926d4dd4130487a0ad2bb5c2a6544d608ff92e034dbace17014d9b9bc62\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c478bd0753dd90a7a7a44b076cc2704881adedf2081d772efc10a03ac86d809d\"" Sep 10 00:42:09.851208 env[1311]: time="2025-09-10T00:42:09.849511733Z" level=info msg="StartContainer for \"c478bd0753dd90a7a7a44b076cc2704881adedf2081d772efc10a03ac86d809d\"" Sep 10 00:42:09.901206 systemd-networkd[1075]: calie1d3e8ef100: Gained IPv6LL Sep 10 00:42:09.911000 audit[4678]: USER_ACCT pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:09.913693 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 36650 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:09.914000 audit[4678]: CRED_ACQ pid=4678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:09.915000 audit[4678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcfe62250 a2=3 a3=0 items=0 ppid=1 pid=4678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:09.915000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:09.920213 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:09.930828 systemd[1]: Started session-10.scope. Sep 10 00:42:09.931868 systemd-logind[1294]: New session 10 of user core. 
Sep 10 00:42:09.944000 audit[4678]: USER_START pid=4678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:09.947000 audit[4704]: CRED_ACQ pid=4704 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:09.967226 env[1311]: time="2025-09-10T00:42:09.967151533Z" level=info msg="StartContainer for \"c478bd0753dd90a7a7a44b076cc2704881adedf2081d772efc10a03ac86d809d\" returns successfully" Sep 10 00:42:10.132461 sshd[4678]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:10.133000 audit[4678]: USER_END pid=4678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:10.133000 audit[4678]: CRED_DISP pid=4678 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:10.136506 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:36650.service: Deactivated successfully. Sep 10 00:42:10.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.41:22-10.0.0.1:36650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:10.138038 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:42:10.138098 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:42:10.139440 systemd-logind[1294]: Removed session 10. 
Sep 10 00:42:10.280995 kubelet[2167]: E0910 00:42:10.280282 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:10.280995 kubelet[2167]: E0910 00:42:10.280500 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:10.324364 kubelet[2167]: I0910 00:42:10.324245 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c4f8f5f79-87rvz" podStartSLOduration=3.633369339 podStartE2EDuration="9.324219085s" podCreationTimestamp="2025-09-10 00:42:01 +0000 UTC" firstStartedPulling="2025-09-10 00:42:04.102272998 +0000 UTC m=+47.291181090" lastFinishedPulling="2025-09-10 00:42:09.793122734 +0000 UTC m=+52.982030836" observedRunningTime="2025-09-10 00:42:10.300258291 +0000 UTC m=+53.489166403" watchObservedRunningTime="2025-09-10 00:42:10.324219085 +0000 UTC m=+53.513127177" Sep 10 00:42:10.324000 audit[4743]: NETFILTER_CFG table=filter:119 family=2 entries=13 op=nft_register_rule pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:10.324000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe8d3d4c10 a2=0 a3=7ffe8d3d4bfc items=0 ppid=2276 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:10.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:10.330000 audit[4743]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:10.330000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe8d3d4c10 a2=0 a3=7ffe8d3d4bfc items=0 ppid=2276 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:10.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:10.923103 systemd-networkd[1075]: cali876815bc795: Gained IPv6LL Sep 10 00:42:11.283659 kubelet[2167]: E0910 00:42:11.283614 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:11.358000 audit[4745]: NETFILTER_CFG table=filter:121 family=2 entries=12 op=nft_register_rule pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:11.358000 audit[4745]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcfb7e9cc0 a2=0 a3=7ffcfb7e9cac items=0 ppid=2276 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:11.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:11.372000 audit[4745]: NETFILTER_CFG table=nat:122 family=2 
entries=58 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:11.372000 audit[4745]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffcfb7e9cc0 a2=0 a3=7ffcfb7e9cac items=0 ppid=2276 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:11.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:12.285023 kubelet[2167]: E0910 00:42:12.284914 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:12.640859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977156363.mount: Deactivated successfully. Sep 10 00:42:13.797084 env[1311]: time="2025-09-10T00:42:13.796996933Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:13.799510 env[1311]: time="2025-09-10T00:42:13.799444377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:13.801636 env[1311]: time="2025-09-10T00:42:13.801578720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:13.803444 env[1311]: time="2025-09-10T00:42:13.803394039Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:13.804100 env[1311]: time="2025-09-10T00:42:13.804053046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 00:42:13.805543 env[1311]: time="2025-09-10T00:42:13.805509025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:42:13.806616 env[1311]: time="2025-09-10T00:42:13.806575324Z" level=info msg="CreateContainer within sandbox \"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:42:13.821521 env[1311]: time="2025-09-10T00:42:13.821449495Z" level=info msg="CreateContainer within sandbox \"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9\"" Sep 10 00:42:13.822207 env[1311]: time="2025-09-10T00:42:13.822167084Z" level=info msg="StartContainer for \"0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9\"" Sep 10 00:42:14.167148 env[1311]: time="2025-09-10T00:42:14.166925467Z" level=info msg="StartContainer for \"0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9\" returns successfully" Sep 10 00:42:14.303615 kubelet[2167]: I0910 00:42:14.303185 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/goldmane-7988f88666-9rdtv" podStartSLOduration=29.232377468 podStartE2EDuration="36.303161903s" podCreationTimestamp="2025-09-10 00:41:38 +0000 UTC" firstStartedPulling="2025-09-10 00:42:06.734380128 +0000 UTC m=+49.923288220" lastFinishedPulling="2025-09-10 00:42:13.805164563 +0000 UTC m=+56.994072655" observedRunningTime="2025-09-10 00:42:14.303094944 +0000 UTC m=+57.492003056" watchObservedRunningTime="2025-09-10 00:42:14.303161903 +0000 UTC m=+57.492070035" Sep 10 00:42:14.319000 audit[4807]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:14.329040 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 10 00:42:14.329228 kernel: audit: type=1325 audit(1757464934.319:419): table=filter:123 family=2 entries=12 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:14.329258 kernel: audit: type=1300 audit(1757464934.319:419): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd699e1a90 a2=0 a3=7ffd699e1a7c items=0 ppid=2276 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:14.319000 audit[4807]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd699e1a90 a2=0 a3=7ffd699e1a7c items=0 ppid=2276 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:14.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:14.332368 kernel: audit: type=1327 audit(1757464934.319:419): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:14.331000 audit[4807]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:14.331000 audit[4807]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd699e1a90 a2=0 a3=7ffd699e1a7c items=0 ppid=2276 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:14.340559 kernel: audit: type=1325 audit(1757464934.331:420): table=nat:124 family=2 entries=22 op=nft_register_rule pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:14.340688 kernel: audit: type=1300 audit(1757464934.331:420): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd699e1a90 a2=0 a3=7ffd699e1a7c items=0 ppid=2276 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:14.340758 kernel: audit: type=1327 audit(1757464934.331:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:14.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:15.136000 audit[1]: SERVICE_START pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.41:22-10.0.0.1:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:15.137613 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:52294.service. Sep 10 00:42:15.143374 kernel: audit: type=1130 audit(1757464935.136:421): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.41:22-10.0.0.1:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:15.185000 audit[4815]: USER_ACCT pid=4815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.185859 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 52294 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:15.189000 audit[4815]: CRED_ACQ pid=4815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.190931 sshd[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:15.193910 kernel: audit: type=1101 audit(1757464935.185:422): pid=4815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.193967 kernel: audit: type=1103 audit(1757464935.189:423): pid=4815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.193993 kernel: audit: type=1006 audit(1757464935.189:424): pid=4815 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Sep 10 00:42:15.189000 audit[4815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe57897be0 a2=3 a3=0 items=0 ppid=1 pid=4815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:15.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:15.196084 systemd-logind[1294]: New session 11 of user core. Sep 10 00:42:15.196775 systemd[1]: Started session-11.scope. Sep 10 00:42:15.201000 audit[4815]: USER_START pid=4815 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.202000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.324456 systemd[1]: run-containerd-runc-k8s.io-0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9-runc.DAaNZG.mount: Deactivated successfully. 
Sep 10 00:42:15.435746 sshd[4815]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:15.436000 audit[4815]: USER_END pid=4815 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.436000 audit[4815]: CRED_DISP pid=4815 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:15.438414 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:52294.service: Deactivated successfully. Sep 10 00:42:15.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.41:22-10.0.0.1:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:15.439474 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:42:15.439488 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:42:15.440515 systemd-logind[1294]: Removed session 11. Sep 10 00:42:15.457889 env[1311]: time="2025-09-10T00:42:15.457840630Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:15.460017 env[1311]: time="2025-09-10T00:42:15.459972873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:15.461668 env[1311]: time="2025-09-10T00:42:15.461637319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:15.463423 env[1311]: time="2025-09-10T00:42:15.463386577Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:15.464061 env[1311]: time="2025-09-10T00:42:15.464013811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 00:42:15.466372 env[1311]: time="2025-09-10T00:42:15.465429327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:42:15.467990 env[1311]: time="2025-09-10T00:42:15.467957041Z" level=info msg="CreateContainer within sandbox \"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:42:15.484760 env[1311]: time="2025-09-10T00:42:15.484686425Z" level=info msg="CreateContainer within sandbox \"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8eadabe5fe7c2bd6bd4a1ac79c8eb34cb16be5912e845b0331870d81837d02da\"" Sep 10 00:42:15.485420 env[1311]: time="2025-09-10T00:42:15.485392741Z" level=info msg="StartContainer for \"8eadabe5fe7c2bd6bd4a1ac79c8eb34cb16be5912e845b0331870d81837d02da\"" Sep 10 00:42:15.534639 env[1311]: 
time="2025-09-10T00:42:15.534593320Z" level=info msg="StartContainer for \"8eadabe5fe7c2bd6bd4a1ac79c8eb34cb16be5912e845b0331870d81837d02da\" returns successfully" Sep 10 00:42:16.890973 env[1311]: time="2025-09-10T00:42:16.890913814Z" level=info msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.949 [WARNING][4896] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" WorkloadEndpoint="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.949 [INFO][4896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.955 [INFO][4896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" iface="eth0" netns="" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.955 [INFO][4896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.955 [INFO][4896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.982 [INFO][4906] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.982 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.982 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.990 [WARNING][4906] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.990 [INFO][4906] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.993 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.001776 env[1311]: 2025-09-10 00:42:16.998 [INFO][4896] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.002547 env[1311]: time="2025-09-10T00:42:17.001805775Z" level=info msg="TearDown network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" successfully" Sep 10 00:42:17.002547 env[1311]: time="2025-09-10T00:42:17.001841193Z" level=info msg="StopPodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" returns successfully" Sep 10 00:42:17.002547 env[1311]: time="2025-09-10T00:42:17.002439240Z" level=info msg="RemovePodSandbox for \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" Sep 10 00:42:17.002547 env[1311]: time="2025-09-10T00:42:17.002469097Z" level=info msg="Forcibly stopping sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\"" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.035 [WARNING][4922] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" WorkloadEndpoint="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.036 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.036 [INFO][4922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" iface="eth0" netns="" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.036 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.037 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.058 [INFO][4930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.058 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.058 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.064 [WARNING][4930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.064 [INFO][4930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" HandleID="k8s-pod-network.3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Workload="localhost-k8s-whisker--74dc4f84bd--6t868-eth0" Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.066 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:42:17.070108 env[1311]: 2025-09-10 00:42:17.068 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1" Sep 10 00:42:17.070751 env[1311]: time="2025-09-10T00:42:17.070714966Z" level=info msg="TearDown network for sandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" successfully" Sep 10 00:42:17.211708 env[1311]: time="2025-09-10T00:42:17.211641948Z" level=info msg="RemovePodSandbox \"3609889c24dd4c26f2ffca6a18f5b2773b12a5d5f4bc3f4f3897c84cf5235fe1\" returns successfully" Sep 10 00:42:17.212317 env[1311]: time="2025-09-10T00:42:17.212292486Z" level=info msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.247 [WARNING][4947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4", Pod:"coredns-7c65d6cfc9-dxvf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1d3e8ef100", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.247 [INFO][4947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.247 [INFO][4947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" iface="eth0" netns="" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.247 [INFO][4947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.247 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.272 [INFO][4956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.273 [INFO][4956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.273 [INFO][4956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.279 [WARNING][4956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.279 [INFO][4956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.280 [INFO][4956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.284710 env[1311]: 2025-09-10 00:42:17.282 [INFO][4947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.285271 env[1311]: time="2025-09-10T00:42:17.284743197Z" level=info msg="TearDown network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" successfully" Sep 10 00:42:17.285271 env[1311]: time="2025-09-10T00:42:17.284778084Z" level=info msg="StopPodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" returns successfully" Sep 10 00:42:17.285526 env[1311]: time="2025-09-10T00:42:17.285475020Z" level=info msg="RemovePodSandbox for \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" Sep 10 00:42:17.285583 env[1311]: time="2025-09-10T00:42:17.285517001Z" level=info msg="Forcibly stopping sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\"" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.317 [WARNING][4975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ffbcf960-312a-4e1c-84c9-bb7a1a2c101f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"addbe7cd94d534ed4facae862ede92ab2d261943008902ec730bd4290702c7b4", Pod:"coredns-7c65d6cfc9-dxvf6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1d3e8ef100", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.317 [INFO][4975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.317 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" iface="eth0" netns="" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.317 [INFO][4975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.317 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.338 [INFO][4984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.339 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.339 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.345 [WARNING][4984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.345 [INFO][4984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" HandleID="k8s-pod-network.1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Workload="localhost-k8s-coredns--7c65d6cfc9--dxvf6-eth0" Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.346 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.350257 env[1311]: 2025-09-10 00:42:17.348 [INFO][4975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5" Sep 10 00:42:17.350753 env[1311]: time="2025-09-10T00:42:17.350290933Z" level=info msg="TearDown network for sandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" successfully" Sep 10 00:42:17.354032 env[1311]: time="2025-09-10T00:42:17.353940992Z" level=info msg="RemovePodSandbox \"1822319b1b87a182e2f13b5dfcd21bc66175bb527a5ff625199e8ca2a63c89c5\" returns successfully" Sep 10 00:42:17.354603 env[1311]: time="2025-09-10T00:42:17.354560499Z" level=info msg="StopPodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.387 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0", GenerateName:"calico-kube-controllers-6dc78c4547-", Namespace:"calico-system", SelfLink:"", UID:"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc78c4547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65", Pod:"calico-kube-controllers-6dc78c4547-tl96q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad7d140b259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.387 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 
00:42:17.425856 env[1311]: 2025-09-10 00:42:17.387 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" iface="eth0" netns="" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.387 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.387 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.412 [INFO][5010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.412 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.413 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.418 [WARNING][5010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.419 [INFO][5010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.422 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.425856 env[1311]: 2025-09-10 00:42:17.424 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.426378 env[1311]: time="2025-09-10T00:42:17.425884842Z" level=info msg="TearDown network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" successfully" Sep 10 00:42:17.426378 env[1311]: time="2025-09-10T00:42:17.425916973Z" level=info msg="StopPodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" returns successfully" Sep 10 00:42:17.426633 env[1311]: time="2025-09-10T00:42:17.426600584Z" level=info msg="RemovePodSandbox for \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" Sep 10 00:42:17.426688 env[1311]: time="2025-09-10T00:42:17.426643075Z" level=info msg="Forcibly stopping sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\"" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.460 [WARNING][5029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0", GenerateName:"calico-kube-controllers-6dc78c4547-", Namespace:"calico-system", SelfLink:"", UID:"8fbe78dc-9a36-4f23-aeaa-9b63cfbdf2b4", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc78c4547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65", Pod:"calico-kube-controllers-6dc78c4547-tl96q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad7d140b259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.461 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.461 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" iface="eth0" netns="" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.461 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.461 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.480 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.480 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.480 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.486 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.486 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" HandleID="k8s-pod-network.31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Workload="localhost-k8s-calico--kube--controllers--6dc78c4547--tl96q-eth0" Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.488 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.491432 env[1311]: 2025-09-10 00:42:17.489 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4" Sep 10 00:42:17.492702 env[1311]: time="2025-09-10T00:42:17.491381570Z" level=info msg="TearDown network for sandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" successfully" Sep 10 00:42:17.496356 env[1311]: time="2025-09-10T00:42:17.496308567Z" level=info msg="RemovePodSandbox \"31a34bd4218a53a1b475b541f0237002bd8e0fe0677b4ba9d8e21f535f04e9e4\" returns successfully" Sep 10 00:42:17.496873 env[1311]: time="2025-09-10T00:42:17.496843602Z" level=info msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.538 [WARNING][5056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf3b90c9-768d-48d6-a148-e6a622704a6d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea", Pod:"coredns-7c65d6cfc9-td5ft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e49f376bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.538 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.538 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" iface="eth0" netns="" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.538 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.538 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.556 [INFO][5065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.556 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.556 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.562 [WARNING][5065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.562 [INFO][5065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.563 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.568310 env[1311]: 2025-09-10 00:42:17.565 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.568310 env[1311]: time="2025-09-10T00:42:17.566856870Z" level=info msg="TearDown network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" successfully" Sep 10 00:42:17.568310 env[1311]: time="2025-09-10T00:42:17.566895134Z" level=info msg="StopPodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" returns successfully" Sep 10 00:42:17.568310 env[1311]: time="2025-09-10T00:42:17.567511225Z" level=info msg="RemovePodSandbox for \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" Sep 10 00:42:17.568310 env[1311]: time="2025-09-10T00:42:17.567539769Z" level=info msg="Forcibly stopping sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\"" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.607 [WARNING][5083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf3b90c9-768d-48d6-a148-e6a622704a6d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a6f72c1579c42e1cac06c18687d88c9fc24a4843319282839f1b859e4cbd3ea", Pod:"coredns-7c65d6cfc9-td5ft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e49f376bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.608 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.608 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" iface="eth0" netns="" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.608 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.608 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.628 [INFO][5092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.628 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.628 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.633 [WARNING][5092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.634 [INFO][5092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" HandleID="k8s-pod-network.c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Workload="localhost-k8s-coredns--7c65d6cfc9--td5ft-eth0" Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.635 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.639172 env[1311]: 2025-09-10 00:42:17.637 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d" Sep 10 00:42:17.639884 env[1311]: time="2025-09-10T00:42:17.639681248Z" level=info msg="TearDown network for sandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" successfully" Sep 10 00:42:17.643649 env[1311]: time="2025-09-10T00:42:17.643587286Z" level=info msg="RemovePodSandbox \"c8546eeb21ef8b0bb97e3364d0b13519eb003c87e049b29c86b10cb349737c7d\" returns successfully" Sep 10 00:42:17.644284 env[1311]: time="2025-09-10T00:42:17.644257091Z" level=info msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.678 [WARNING][5111] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4", Pod:"calico-apiserver-c88bffbdf-qnnws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c7f603008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.678 [INFO][5111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.678 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" iface="eth0" netns="" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.678 [INFO][5111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.678 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.697 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.698 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.698 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.703 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.703 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.705 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.708393 env[1311]: 2025-09-10 00:42:17.706 [INFO][5111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.708990 env[1311]: time="2025-09-10T00:42:17.708947153Z" level=info msg="TearDown network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" successfully" Sep 10 00:42:17.709077 env[1311]: time="2025-09-10T00:42:17.709055390Z" level=info msg="StopPodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" returns successfully" Sep 10 00:42:17.709623 env[1311]: time="2025-09-10T00:42:17.709600676Z" level=info msg="RemovePodSandbox for \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" Sep 10 00:42:17.709682 env[1311]: time="2025-09-10T00:42:17.709629511Z" level=info msg="Forcibly stopping sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\"" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.739 [WARNING][5138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec1f5664-7bb7-48f1-8dd7-ff22e806c5fe", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4", Pod:"calico-apiserver-c88bffbdf-qnnws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c7f603008", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.739 [INFO][5138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.739 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" iface="eth0" netns="" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.740 [INFO][5138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.740 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.760 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.760 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.760 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.768 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.768 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" HandleID="k8s-pod-network.5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Workload="localhost-k8s-calico--apiserver--c88bffbdf--qnnws-eth0" Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.770 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.773672 env[1311]: 2025-09-10 00:42:17.771 [INFO][5138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86" Sep 10 00:42:17.774153 env[1311]: time="2025-09-10T00:42:17.773654898Z" level=info msg="TearDown network for sandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" successfully" Sep 10 00:42:17.777290 env[1311]: time="2025-09-10T00:42:17.777262505Z" level=info msg="RemovePodSandbox \"5d69602d5fc7171caa103262ed6e736e9a524711098d6b4150468a10320a2e86\" returns successfully" Sep 10 00:42:17.777824 env[1311]: time="2025-09-10T00:42:17.777781690Z" level=info msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.808 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24", Pod:"calico-apiserver-c88bffbdf-mdkqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876815bc795", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.808 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.841649 
env[1311]: 2025-09-10 00:42:17.808 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" iface="eth0" netns="" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.809 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.809 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.828 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.828 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.828 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.836 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.836 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.838 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.841649 env[1311]: 2025-09-10 00:42:17.839 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.842177 env[1311]: time="2025-09-10T00:42:17.841706565Z" level=info msg="TearDown network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" successfully" Sep 10 00:42:17.842177 env[1311]: time="2025-09-10T00:42:17.841741061Z" level=info msg="StopPodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" returns successfully" Sep 10 00:42:17.842308 env[1311]: time="2025-09-10T00:42:17.842276778Z" level=info msg="RemovePodSandbox for \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" Sep 10 00:42:17.842360 env[1311]: time="2025-09-10T00:42:17.842308829Z" level=info msg="Forcibly stopping sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\"" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.876 [WARNING][5191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0", GenerateName:"calico-apiserver-c88bffbdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d9b8a9b-0a8b-44fd-b257-93a929c46e2c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c88bffbdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24", Pod:"calico-apiserver-c88bffbdf-mdkqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali876815bc795", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.877 [INFO][5191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.877 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" iface="eth0" netns="" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.877 [INFO][5191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.877 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.896 [INFO][5200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.896 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.896 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.902 [WARNING][5200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.902 [INFO][5200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" HandleID="k8s-pod-network.3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Workload="localhost-k8s-calico--apiserver--c88bffbdf--mdkqg-eth0" Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.903 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.906983 env[1311]: 2025-09-10 00:42:17.904 [INFO][5191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d" Sep 10 00:42:17.908528 env[1311]: time="2025-09-10T00:42:17.907034199Z" level=info msg="TearDown network for sandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" successfully" Sep 10 00:42:17.911158 env[1311]: time="2025-09-10T00:42:17.911084696Z" level=info msg="RemovePodSandbox \"3cc6d199ef261ceb2dd31d6ec0c7e73e930029f72feeb8f526eafc075560281d\" returns successfully" Sep 10 00:42:17.911957 env[1311]: time="2025-09-10T00:42:17.911920197Z" level=info msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.944 [WARNING][5218] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9rdtv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7c0bb760-61ca-4fc9-a88d-45f47a6eb434", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568", Pod:"goldmane-7988f88666-9rdtv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali06a6c55b673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.944 [INFO][5218] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.944 [INFO][5218] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" iface="eth0" netns="" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.944 [INFO][5218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.944 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.963 [INFO][5226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.963 [INFO][5226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.963 [INFO][5226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.969 [WARNING][5226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.969 [INFO][5226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.970 [INFO][5226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:17.974520 env[1311]: 2025-09-10 00:42:17.972 [INFO][5218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:17.975023 env[1311]: time="2025-09-10T00:42:17.974551211Z" level=info msg="TearDown network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" successfully" Sep 10 00:42:17.975023 env[1311]: time="2025-09-10T00:42:17.974600927Z" level=info msg="StopPodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" returns successfully" Sep 10 00:42:17.975364 env[1311]: time="2025-09-10T00:42:17.975292081Z" level=info msg="RemovePodSandbox for \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" Sep 10 00:42:17.975537 env[1311]: time="2025-09-10T00:42:17.975356836Z" level=info msg="Forcibly stopping sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\"" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.013 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9rdtv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7c0bb760-61ca-4fc9-a88d-45f47a6eb434", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec070135215bd4b7815240dc9735422ba91d49f2398e971841ea7fc526fa7568", Pod:"goldmane-7988f88666-9rdtv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali06a6c55b673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.014 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.014 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" iface="eth0" netns="" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.014 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.014 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.034 [INFO][5253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.034 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.035 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.041 [WARNING][5253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.041 [INFO][5253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" HandleID="k8s-pod-network.ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Workload="localhost-k8s-goldmane--7988f88666--9rdtv-eth0" Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.043 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:18.046508 env[1311]: 2025-09-10 00:42:18.044 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30" Sep 10 00:42:18.047312 env[1311]: time="2025-09-10T00:42:18.046507212Z" level=info msg="TearDown network for sandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" successfully" Sep 10 00:42:18.050519 env[1311]: time="2025-09-10T00:42:18.050481658Z" level=info msg="RemovePodSandbox \"ba25c0b2e1db2a4eeac7b973ff6f54c829f3b941771217e1961b71ee84a96e30\" returns successfully" Sep 10 00:42:18.051119 env[1311]: time="2025-09-10T00:42:18.051065587Z" level=info msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.081 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k7vf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e46e9e0-10bc-4c50-9705-59d1dee4c692", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a", Pod:"csi-node-driver-k7vf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78dfd0a03c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.081 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 
00:42:18.082 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" iface="eth0" netns="" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.082 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.082 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.102 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.102 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.102 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.108 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.109 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.110 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:18.114413 env[1311]: 2025-09-10 00:42:18.112 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.114878 env[1311]: time="2025-09-10T00:42:18.114439364Z" level=info msg="TearDown network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" successfully" Sep 10 00:42:18.114878 env[1311]: time="2025-09-10T00:42:18.114473931Z" level=info msg="StopPodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" returns successfully" Sep 10 00:42:18.115001 env[1311]: time="2025-09-10T00:42:18.114966945Z" level=info msg="RemovePodSandbox for \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" Sep 10 00:42:18.115045 env[1311]: time="2025-09-10T00:42:18.115009847Z" level=info msg="Forcibly stopping sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\"" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.147 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k7vf2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e46e9e0-10bc-4c50-9705-59d1dee4c692", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a", Pod:"csi-node-driver-k7vf2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78dfd0a03c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.147 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.147 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" iface="eth0" netns="" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.147 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.147 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.171 [INFO][5307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.171 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.171 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.177 [WARNING][5307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.177 [INFO][5307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" HandleID="k8s-pod-network.8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Workload="localhost-k8s-csi--node--driver--k7vf2-eth0" Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.178 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:42:18.182585 env[1311]: 2025-09-10 00:42:18.180 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86" Sep 10 00:42:18.183250 env[1311]: time="2025-09-10T00:42:18.183186477Z" level=info msg="TearDown network for sandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" successfully" Sep 10 00:42:18.223735 env[1311]: time="2025-09-10T00:42:18.223680392Z" level=info msg="RemovePodSandbox \"8a5fdebec74071de8eebd0515f789d5dd6c55a357e688d23ae903802cb302c86\" returns successfully" Sep 10 00:42:18.267520 env[1311]: time="2025-09-10T00:42:18.267476635Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:18.269663 env[1311]: time="2025-09-10T00:42:18.269612768Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:18.271414 env[1311]: time="2025-09-10T00:42:18.271381327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:18.272929 env[1311]: time="2025-09-10T00:42:18.272888535Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:18.273404 env[1311]: time="2025-09-10T00:42:18.273370789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:42:18.278606 env[1311]: time="2025-09-10T00:42:18.276530754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:42:18.278606 env[1311]: time="2025-09-10T00:42:18.277458362Z" level=info msg="CreateContainer within sandbox \"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:42:18.288459 env[1311]: time="2025-09-10T00:42:18.288415457Z" level=info msg="CreateContainer within sandbox \"36b1ab3f4b7ad075e0d8293c4e9782c0d4e67406891e95097ee43732c907c8c4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88b596b1dbf5d3990886cef6eabecf0acdd1921ac2517823294267abf0c73101\"" Sep 10 00:42:18.289362 env[1311]: time="2025-09-10T00:42:18.288936086Z" level=info msg="StartContainer for 
\"88b596b1dbf5d3990886cef6eabecf0acdd1921ac2517823294267abf0c73101\"" Sep 10 00:42:18.351639 env[1311]: time="2025-09-10T00:42:18.351498017Z" level=info msg="StartContainer for \"88b596b1dbf5d3990886cef6eabecf0acdd1921ac2517823294267abf0c73101\" returns successfully" Sep 10 00:42:19.331051 kubelet[2167]: I0910 00:42:19.330980 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c88bffbdf-qnnws" podStartSLOduration=31.9964375 podStartE2EDuration="43.330961621s" podCreationTimestamp="2025-09-10 00:41:36 +0000 UTC" firstStartedPulling="2025-09-10 00:42:06.939675275 +0000 UTC m=+50.128583368" lastFinishedPulling="2025-09-10 00:42:18.274199397 +0000 UTC m=+61.463107489" observedRunningTime="2025-09-10 00:42:19.330184943 +0000 UTC m=+62.519093035" watchObservedRunningTime="2025-09-10 00:42:19.330961621 +0000 UTC m=+62.519869703" Sep 10 00:42:19.346000 audit[5355]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:19.347773 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 10 00:42:19.347853 kernel: audit: type=1325 audit(1757464939.346:430): table=filter:125 family=2 entries=12 op=nft_register_rule pid=5355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:19.346000 audit[5355]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff56e1fdb0 a2=0 a3=7fff56e1fd9c items=0 ppid=2276 pid=5355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:19.355615 kernel: audit: type=1300 audit(1757464939.346:430): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff56e1fdb0 a2=0 a3=7fff56e1fd9c items=0 ppid=2276 pid=5355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:19.355682 kernel: audit: type=1327 audit(1757464939.346:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:19.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:19.359000 audit[5355]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:19.359000 audit[5355]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff56e1fdb0 a2=0 a3=7fff56e1fd9c items=0 ppid=2276 pid=5355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:19.367021 kernel: audit: type=1325 audit(1757464939.359:431): table=nat:126 family=2 entries=22 op=nft_register_rule pid=5355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:19.367069 kernel: audit: type=1300 audit(1757464939.359:431): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff56e1fdb0 a2=0 a3=7fff56e1fd9c items=0 ppid=2276 pid=5355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 10 00:42:19.367091 kernel: audit: type=1327 audit(1757464939.359:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:19.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:20.323229 kubelet[2167]: I0910 00:42:20.323181 2167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:42:20.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.41:22-10.0.0.1:59272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:20.438686 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:59272.service. Sep 10 00:42:20.444376 kernel: audit: type=1130 audit(1757464940.438:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.41:22-10.0.0.1:59272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:20.476000 audit[5358]: USER_ACCT pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.477675 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 59272 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:20.486400 kernel: audit: type=1101 audit(1757464940.476:433): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.486539 kernel: audit: type=1103 audit(1757464940.483:434): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.483000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.485056 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:20.490506 systemd-logind[1294]: New session 12 of user core. Sep 10 00:42:20.491290 kernel: audit: type=1006 audit(1757464940.484:435): pid=5358 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 10 00:42:20.484000 audit[5358]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff2ea2b260 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:20.484000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:20.491267 systemd[1]: Started session-12.scope. 
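The Calico entries at the start of this stretch (cni-plugin/k8s.go 640/647/653 together with ipam/ipam_plugin.go 353/368/412/429/440/374) trace the same teardown sequence for every sandbox: acquire the host-wide IPAM lock, try to release the address by handle ID, fall back to the workload ID, warn and ignore when nothing is allocated any more, then drop the lock. The sketch below reproduces only that shape with a plain mutex and in-memory maps; every identifier in it is hypothetical and it is not Calico's implementation.

```go
// ipam_release_sketch.go — a minimal, stand-alone sketch of the release sequence the
// ipam/ipam_plugin.go lines above trace. Names and data structures are hypothetical.
package main

import (
	"log"
	"sync"
)

type ipamStore struct {
	mu         sync.Mutex        // stands in for the "host-wide IPAM lock" in the log
	byHandle   map[string]string // handle ID -> allocated address
	byWorkload map[string]string // workload  -> allocated address
}

// releaseForTeardown mirrors the CNI DEL path: it must be idempotent, because the runtime
// can issue DEL for a sandbox whose address has already been released.
func (s *ipamStore) releaseForTeardown(handleID, workloadID string) {
	log.Print("About to acquire host-wide IPAM lock.")
	s.mu.Lock()
	log.Print("Acquired host-wide IPAM lock.")
	defer func() {
		s.mu.Unlock()
		log.Print("Released host-wide IPAM lock.")
	}()

	if addr, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		log.Printf("Releasing address %s using handleID %s", addr, handleID)
	} else {
		// Matches the WARNING in the log: the address is already gone, so just ignore it.
		log.Printf("Asked to release address but it doesn't exist. Ignoring HandleID=%q", handleID)
	}

	// Second pass keyed by workload ID, as in "Releasing address using workloadID".
	if _, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
	}
	log.Printf("Releasing address using workloadID %q", workloadID)
}

func main() {
	s := &ipamStore{
		byHandle:   map[string]string{"k8s-pod-network.demo": "192.168.88.200/32"},
		byWorkload: map[string]string{"localhost-k8s-demo-eth0": "192.168.88.200/32"},
	}
	// The first teardown releases the address; repeating it only logs the warning,
	// which is the pattern the entries above show for already-torn-down sandboxes.
	s.releaseForTeardown("k8s-pod-network.demo", "localhost-k8s-demo-eth0")
	s.releaseForTeardown("k8s-pod-network.demo", "localhost-k8s-demo-eth0")
}
```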
Sep 10 00:42:20.499000 audit[5358]: USER_START pid=5358 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.500000 audit[5361]: CRED_ACQ pid=5361 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.907246 sshd[5358]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:20.908000 audit[5358]: USER_END pid=5358 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.908000 audit[5358]: CRED_DISP pid=5358 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:20.910522 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:59272.service: Deactivated successfully. Sep 10 00:42:20.911651 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:42:20.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.41:22-10.0.0.1:59272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:20.912980 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:42:20.913993 systemd-logind[1294]: Removed session 12. Sep 10 00:42:21.618232 systemd[1]: run-containerd-runc-k8s.io-0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9-runc.vSy9pb.mount: Deactivated successfully. 
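The audit PROCTITLE fields scattered through these records carry the process argv, hex-encoded with NUL separators. Decoding the value attached to the NETFILTER_CFG events above and below recovers the restore command that produced them, and the value on the sshd SYSCALL records decodes to "sshd: core [priv]". A small stdlib-only decoder is enough to check this (file and function names here are made up for the example):

```go
// proctitle_decode.go — decodes the hex-encoded, NUL-separated PROCTITLE fields from the
// audit records in this log. Stdlib only.
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the audit record.
	parts := bytes.Split(raw, []byte{0})
	args := make([]string, 0, len(parts))
	for _, p := range parts {
		args = append(args, string(p))
	}
	return strings.Join(args, " "), nil
}

func main() {
	titles := []string{
		// Attached to the NETFILTER_CFG events in this log.
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
		// Attached to the sshd SYSCALL events.
		"737368643A20636F7265205B707269765D",
	}
	for _, t := range titles {
		cmd, err := decodeProctitle(t)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(cmd)
	}
}
```

The first title decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, i.e. a non-flushing, counter-preserving restore taken under the xtables lock, which is consistent with the nft_register_rule and nft_register_chain entries it generates here.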
Sep 10 00:42:21.721000 audit[5395]: NETFILTER_CFG table=filter:127 family=2 entries=11 op=nft_register_rule pid=5395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:21.721000 audit[5395]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff263bc2c0 a2=0 a3=7fff263bc2ac items=0 ppid=2276 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:21.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:21.728000 audit[5395]: NETFILTER_CFG table=nat:128 family=2 entries=29 op=nft_register_chain pid=5395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:21.728000 audit[5395]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff263bc2c0 a2=0 a3=7fff263bc2ac items=0 ppid=2276 pid=5395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:21.728000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:22.318457 env[1311]: time="2025-09-10T00:42:22.318384153Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.321120 env[1311]: time="2025-09-10T00:42:22.321077092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.322945 env[1311]: time="2025-09-10T00:42:22.322903685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.327252 env[1311]: time="2025-09-10T00:42:22.327208727Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.327779 env[1311]: time="2025-09-10T00:42:22.327739621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 00:42:22.329100 env[1311]: time="2025-09-10T00:42:22.329074494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:42:22.340631 env[1311]: time="2025-09-10T00:42:22.340584481Z" level=info msg="CreateContainer within sandbox \"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:42:22.360070 env[1311]: time="2025-09-10T00:42:22.360009824Z" level=info msg="CreateContainer within sandbox \"4c553d0aab75b32a55630ba1a35d9f3b64d9373aef38707190d0df5ec37fcd65\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a64a260a9c40e62b0a8a8b13bbd0d2df76d0e6779c5c4c3613196fd3278311a\"" Sep 10 00:42:22.360736 env[1311]: 
time="2025-09-10T00:42:22.360695074Z" level=info msg="StartContainer for \"4a64a260a9c40e62b0a8a8b13bbd0d2df76d0e6779c5c4c3613196fd3278311a\"" Sep 10 00:42:22.697868 env[1311]: time="2025-09-10T00:42:22.697748294Z" level=info msg="StartContainer for \"4a64a260a9c40e62b0a8a8b13bbd0d2df76d0e6779c5c4c3613196fd3278311a\" returns successfully" Sep 10 00:42:22.716360 env[1311]: time="2025-09-10T00:42:22.716283264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.718572 env[1311]: time="2025-09-10T00:42:22.718472952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.720215 env[1311]: time="2025-09-10T00:42:22.720158494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.721854 env[1311]: time="2025-09-10T00:42:22.721813349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:22.722578 env[1311]: time="2025-09-10T00:42:22.722536732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:42:22.724871 env[1311]: time="2025-09-10T00:42:22.724833633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:42:22.725909 env[1311]: time="2025-09-10T00:42:22.725853814Z" level=info msg="CreateContainer within sandbox \"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:42:22.743267 env[1311]: time="2025-09-10T00:42:22.743203277Z" level=info msg="CreateContainer within sandbox \"49c76793da4f91c878bde1de37db3e03d051ea7c71205fd36632449db1d85a24\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba6719141d4c8e16d28c57965b1826b415b7f30ccd595139f15ef061c4271fe9\"" Sep 10 00:42:22.744009 env[1311]: time="2025-09-10T00:42:22.743975564Z" level=info msg="StartContainer for \"ba6719141d4c8e16d28c57965b1826b415b7f30ccd595139f15ef061c4271fe9\"" Sep 10 00:42:22.949573 env[1311]: time="2025-09-10T00:42:22.949397786Z" level=info msg="StartContainer for \"ba6719141d4c8e16d28c57965b1826b415b7f30ccd595139f15ef061c4271fe9\" returns successfully" Sep 10 00:42:23.376125 kubelet[2167]: I0910 00:42:23.375938 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dc78c4547-tl96q" podStartSLOduration=29.557181663 podStartE2EDuration="44.375914418s" podCreationTimestamp="2025-09-10 00:41:39 +0000 UTC" firstStartedPulling="2025-09-10 00:42:07.51013336 +0000 UTC m=+50.699041452" lastFinishedPulling="2025-09-10 00:42:22.328866115 +0000 UTC m=+65.517774207" observedRunningTime="2025-09-10 00:42:23.375626287 +0000 UTC m=+66.564534419" watchObservedRunningTime="2025-09-10 00:42:23.375914418 +0000 UTC m=+66.564822510" Sep 10 00:42:23.407871 kubelet[2167]: I0910 00:42:23.407778 2167 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-apiserver/calico-apiserver-c88bffbdf-mdkqg" podStartSLOduration=34.405091291 podStartE2EDuration="47.407753455s" podCreationTimestamp="2025-09-10 00:41:36 +0000 UTC" firstStartedPulling="2025-09-10 00:42:09.721213618 +0000 UTC m=+52.910121710" lastFinishedPulling="2025-09-10 00:42:22.723875792 +0000 UTC m=+65.912783874" observedRunningTime="2025-09-10 00:42:23.393259987 +0000 UTC m=+66.582168099" watchObservedRunningTime="2025-09-10 00:42:23.407753455 +0000 UTC m=+66.596661547" Sep 10 00:42:23.406000 audit[5508]: NETFILTER_CFG table=filter:129 family=2 entries=10 op=nft_register_rule pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:23.406000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffefe0f1f10 a2=0 a3=7ffefe0f1efc items=0 ppid=2276 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:23.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:23.416000 audit[5508]: NETFILTER_CFG table=nat:130 family=2 entries=24 op=nft_register_rule pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:23.416000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffefe0f1f10 a2=0 a3=7ffefe0f1efc items=0 ppid=2276 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:23.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:24.337583 kubelet[2167]: I0910 00:42:24.337052 2167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:42:24.578639 env[1311]: time="2025-09-10T00:42:24.578537648Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:24.581021 env[1311]: time="2025-09-10T00:42:24.580979151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:24.583451 env[1311]: time="2025-09-10T00:42:24.583399304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:24.585437 env[1311]: time="2025-09-10T00:42:24.585389485Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:42:24.586041 env[1311]: time="2025-09-10T00:42:24.585971618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 00:42:24.588634 env[1311]: time="2025-09-10T00:42:24.588516840Z" level=info msg="CreateContainer within sandbox 
\"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 00:42:24.609344 env[1311]: time="2025-09-10T00:42:24.609236708Z" level=info msg="CreateContainer within sandbox \"a6c30a047a88d78fa4a74ffae59637c771fd4b8c8e7f1f58938e7040bd27ea7a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45d58d0c23608c79c5fc80aef213356ada1706716035d7a9ab5a776f89f54838\"" Sep 10 00:42:24.610288 env[1311]: time="2025-09-10T00:42:24.609977584Z" level=info msg="StartContainer for \"45d58d0c23608c79c5fc80aef213356ada1706716035d7a9ab5a776f89f54838\"" Sep 10 00:42:24.639252 systemd[1]: run-containerd-runc-k8s.io-45d58d0c23608c79c5fc80aef213356ada1706716035d7a9ab5a776f89f54838-runc.YIiqhW.mount: Deactivated successfully. Sep 10 00:42:24.674139 env[1311]: time="2025-09-10T00:42:24.673105520Z" level=info msg="StartContainer for \"45d58d0c23608c79c5fc80aef213356ada1706716035d7a9ab5a776f89f54838\" returns successfully" Sep 10 00:42:25.037309 kubelet[2167]: I0910 00:42:25.037243 2167 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:42:25.039004 kubelet[2167]: I0910 00:42:25.038963 2167 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:42:25.404594 kubelet[2167]: I0910 00:42:25.404428 2167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k7vf2" podStartSLOduration=29.650143284 podStartE2EDuration="47.404407857s" podCreationTimestamp="2025-09-10 00:41:38 +0000 UTC" firstStartedPulling="2025-09-10 00:42:06.832766869 +0000 UTC m=+50.021674961" lastFinishedPulling="2025-09-10 00:42:24.587031442 +0000 UTC m=+67.775939534" observedRunningTime="2025-09-10 00:42:25.404126631 +0000 UTC m=+68.593034723" watchObservedRunningTime="2025-09-10 00:42:25.404407857 +0000 UTC m=+68.593315949" Sep 10 00:42:25.911056 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:59284.service. Sep 10 00:42:25.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.41:22-10.0.0.1:59284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:25.913030 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 10 00:42:25.913122 kernel: audit: type=1130 audit(1757464945.909:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.41:22-10.0.0.1:59284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:25.949000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.950951 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 59284 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:25.953128 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:25.951000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.958048 systemd-logind[1294]: New session 13 of user core. Sep 10 00:42:25.958702 kernel: audit: type=1101 audit(1757464945.949:446): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.958754 kernel: audit: type=1103 audit(1757464945.951:447): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.958785 kernel: audit: type=1006 audit(1757464945.951:448): pid=5545 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 10 00:42:25.959084 systemd[1]: Started session-13.scope. 
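The pod_startup_latency_tracker entries in this stretch report two figures per pod: podStartE2EDuration is the span from podCreationTimestamp to the observed running time, and podStartSLOduration appears to be that span minus the time spent pulling images (lastFinishedPulling - firstStartedPulling). Re-deriving the csi-node-driver-k7vf2 numbers from the timestamps in the entry a few lines above reproduces both values exactly under that assumption; the sketch below is not kubelet's code, just the arithmetic.

```go
// startup_latency_check.go — re-derives the durations in the pod_startup_latency_tracker
// entry for csi-node-driver-k7vf2, assuming podStartSLOduration is the end-to-end span
// minus image-pull time. Timestamps are copied from the log; the trailing monotonic-clock
// suffix (" m=+…") is stripped before parsing.
package main

import (
	"fmt"
	"strings"
	"time"
)

func parse(ts string) time.Time {
	ts = strings.SplitN(ts, " m=", 2)[0] // drop Go's monotonic reading, e.g. " m=+68.593315949"
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", ts)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-09-10 00:41:38 +0000 UTC")
	firstPull := parse("2025-09-10 00:42:06.832766869 +0000 UTC m=+50.021674961")
	lastPull := parse("2025-09-10 00:42:24.587031442 +0000 UTC m=+67.775939534")
	running := parse("2025-09-10 00:42:25.404407857 +0000 UTC m=+68.593315949")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // assumption: end-to-end minus image-pull time

	fmt.Println("podStartE2EDuration:", e2e) // 47.404407857s, as logged
	fmt.Println("podStartSLOduration:", slo) // 29.650143284s, as logged
}
```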
Sep 10 00:42:25.964801 kernel: audit: type=1300 audit(1757464945.951:448): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce8a4ac40 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:25.951000 audit[5545]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce8a4ac40 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:25.966625 kernel: audit: type=1327 audit(1757464945.951:448): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:25.951000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:25.962000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.970851 kernel: audit: type=1105 audit(1757464945.962:449): pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.970988 kernel: audit: type=1103 audit(1757464945.963:450): pid=5548 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:25.963000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.141145 sshd[5545]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:26.141000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.144984 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:59298.service. Sep 10 00:42:26.145769 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:59284.service: Deactivated successfully. Sep 10 00:42:26.141000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.147322 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:42:26.148528 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:42:26.149985 systemd-logind[1294]: Removed session 13. 
Sep 10 00:42:26.150925 kernel: audit: type=1106 audit(1757464946.141:451): pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.151072 kernel: audit: type=1104 audit(1757464946.141:452): pid=5545 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.41:22-10.0.0.1:59298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:26.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.41:22-10.0.0.1:59284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:26.179000 audit[5559]: USER_ACCT pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.180909 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 59298 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:26.180000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.180000 audit[5559]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3c5b7bf0 a2=3 a3=0 items=0 ppid=1 pid=5559 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:26.180000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:26.182293 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:26.186216 systemd-logind[1294]: New session 14 of user core. Sep 10 00:42:26.187171 systemd[1]: Started session-14.scope. Sep 10 00:42:26.190000 audit[5559]: USER_START pid=5559 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.192000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.343814 sshd[5559]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:26.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.41:22-10.0.0.1:59310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:26.344574 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:59310.service. 
Sep 10 00:42:26.351000 audit[5559]: USER_END pid=5559 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.351000 audit[5559]: CRED_DISP pid=5559 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.356075 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:59298.service: Deactivated successfully. Sep 10 00:42:26.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.41:22-10.0.0.1:59298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:26.356997 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:42:26.360752 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:42:26.362021 systemd-logind[1294]: Removed session 14. Sep 10 00:42:26.383000 audit[5571]: USER_ACCT pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.385323 sshd[5571]: Accepted publickey for core from 10.0.0.1 port 59310 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:26.384000 audit[5571]: CRED_ACQ pid=5571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.384000 audit[5571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0eda09c0 a2=3 a3=0 items=0 ppid=1 pid=5571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:26.384000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:26.386478 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:26.390085 systemd-logind[1294]: New session 15 of user core. Sep 10 00:42:26.390896 systemd[1]: Started session-15.scope. 
Sep 10 00:42:26.393000 audit[5571]: USER_START pid=5571 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.395000 audit[5576]: CRED_ACQ pid=5576 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.502960 sshd[5571]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:26.502000 audit[5571]: USER_END pid=5571 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.502000 audit[5571]: CRED_DISP pid=5571 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:26.505402 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:59310.service: Deactivated successfully. Sep 10 00:42:26.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.41:22-10.0.0.1:59310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:26.506697 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:42:26.506765 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:42:26.507482 systemd-logind[1294]: Removed session 15. 
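Sessions 12 through 17 in this log all follow the same sshd lifecycle: an "Accepted publickey" line, a pam_unix session open, a systemd session scope, then the matching close, USER_END and CRED_DISP records. When working through stretches like this it is often enough to pull the user, source address, port and key fingerprint out of the accept line; the regular expression below is an assumption keyed to the exact line shape seen here, not a general sshd parser.

```go
// sshd_accept_parse.go — extracts user, source IP, port and key fingerprint from the
// "Accepted publickey" lines repeated through the sessions above. The regex is tied to
// this log's line shape and may not cover other sshd configurations.
package main

import (
	"fmt"
	"regexp"
)

var acceptRe = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

func main() {
	line := `sshd[5545]: Accepted publickey for core from 10.0.0.1 port 59284 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8`
	m := acceptRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("user=%s addr=%s port=%s keytype=%s fingerprint=%s\n",
		m[1], m[2], m[3], m[4], m[5])
}
```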
Sep 10 00:42:28.059561 kubelet[2167]: I0910 00:42:28.059500 2167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:42:28.807000 audit[5588]: NETFILTER_CFG table=filter:131 family=2 entries=9 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:28.807000 audit[5588]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe31427f60 a2=0 a3=7ffe31427f4c items=0 ppid=2276 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:28.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:28.813000 audit[5588]: NETFILTER_CFG table=nat:132 family=2 entries=31 op=nft_register_chain pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:28.813000 audit[5588]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe31427f60 a2=0 a3=7ffe31427f4c items=0 ppid=2276 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:28.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:28.904953 kubelet[2167]: E0910 00:42:28.904892 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:31.511505 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:46862.service. Sep 10 00:42:31.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:46862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:31.513712 kernel: kauditd_printk_skb: 29 callbacks suppressed Sep 10 00:42:31.513888 kernel: audit: type=1130 audit(1757464951.510:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:46862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:31.560000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.567102 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 46862 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:31.567551 kernel: audit: type=1101 audit(1757464951.560:475): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.567609 kernel: audit: type=1103 audit(1757464951.565:476): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.565000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.567569 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:31.573375 kernel: audit: type=1006 audit(1757464951.565:477): pid=5614 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 10 00:42:31.573584 kernel: audit: type=1300 audit(1757464951.565:477): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8a752f60 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:31.565000 audit[5614]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8a752f60 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:31.575009 systemd[1]: Started session-16.scope. Sep 10 00:42:31.575450 systemd-logind[1294]: New session 16 of user core. 
Sep 10 00:42:31.565000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:31.578613 kernel: audit: type=1327 audit(1757464951.565:477): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:31.582000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.582000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.591939 kernel: audit: type=1105 audit(1757464951.582:478): pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.592039 kernel: audit: type=1103 audit(1757464951.582:479): pid=5617 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.791225 sshd[5614]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:31.791000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.794085 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:46862.service: Deactivated successfully. Sep 10 00:42:31.795365 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:42:31.795509 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:42:31.796275 systemd-logind[1294]: Removed session 16. Sep 10 00:42:31.791000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.801626 kernel: audit: type=1106 audit(1757464951.791:480): pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.801665 kernel: audit: type=1104 audit(1757464951.791:481): pid=5614 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:31.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.41:22-10.0.0.1:46862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:36.795247 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:46872.service. 
Sep 10 00:42:36.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:46872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:36.796925 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 10 00:42:36.796998 kernel: audit: type=1130 audit(1757464956.794:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:46872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:36.824000 audit[5628]: USER_ACCT pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.825831 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 46872 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:36.828211 sshd[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:36.826000 audit[5628]: CRED_ACQ pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.832109 systemd-logind[1294]: New session 17 of user core. Sep 10 00:42:36.833137 systemd[1]: Started session-17.scope. Sep 10 00:42:36.833781 kernel: audit: type=1101 audit(1757464956.824:484): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.833836 kernel: audit: type=1103 audit(1757464956.826:485): pid=5628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.834388 kernel: audit: type=1006 audit(1757464956.826:486): pid=5628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 10 00:42:36.826000 audit[5628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd15c6f870 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:36.840696 kernel: audit: type=1300 audit(1757464956.826:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd15c6f870 a2=3 a3=0 items=0 ppid=1 pid=5628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:36.840779 kernel: audit: type=1327 audit(1757464956.826:486): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:36.826000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:36.842119 kernel: audit: type=1105 audit(1757464956.837:487): pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.837000 audit[5628]: USER_START pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.838000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.851321 kernel: audit: type=1103 audit(1757464956.838:488): pid=5631 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.982672 sshd[5628]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:36.982000 audit[5628]: USER_END pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.986134 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:46872.service: Deactivated successfully. Sep 10 00:42:36.987731 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:42:36.988455 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:42:36.982000 audit[5628]: CRED_DISP pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.989875 systemd-logind[1294]: Removed session 17. Sep 10 00:42:36.993060 kernel: audit: type=1106 audit(1757464956.982:489): pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.993162 kernel: audit: type=1104 audit(1757464956.982:490): pid=5628 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:36.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.41:22-10.0.0.1:46872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:38.901457 kubelet[2167]: E0910 00:42:38.901389 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:41.986006 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:51852.service. Sep 10 00:42:41.987988 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 10 00:42:41.988082 kernel: audit: type=1130 audit(1757464961.985:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:51852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:41.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:51852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:42.024000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.025812 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 51852 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:42.029000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.030444 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:42.034612 kernel: audit: type=1101 audit(1757464962.024:493): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.034674 kernel: audit: type=1103 audit(1757464962.029:494): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.034700 kernel: audit: type=1006 audit(1757464962.029:495): pid=5646 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 10 00:42:42.034099 systemd-logind[1294]: New session 18 of user core. Sep 10 00:42:42.035160 systemd[1]: Started session-18.scope. 
Sep 10 00:42:42.029000 audit[5646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0fc84590 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:42.040263 kernel: audit: type=1300 audit(1757464962.029:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0fc84590 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:42.029000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:42.038000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.046246 kernel: audit: type=1327 audit(1757464962.029:495): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:42.046300 kernel: audit: type=1105 audit(1757464962.038:496): pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.046391 kernel: audit: type=1103 audit(1757464962.040:497): pid=5649 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.040000 audit[5649]: CRED_ACQ pid=5649 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.213899 sshd[5646]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:42.213000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.216800 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:51852.service: Deactivated successfully. Sep 10 00:42:42.218120 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:42:42.218523 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:42:42.213000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.219467 systemd-logind[1294]: Removed session 18. 
Sep 10 00:42:42.223757 kernel: audit: type=1106 audit(1757464962.213:498): pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.223827 kernel: audit: type=1104 audit(1757464962.213:499): pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:42.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.41:22-10.0.0.1:51852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:42.868801 kubelet[2167]: I0910 00:42:42.868752 2167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:42:42.905000 audit[5667]: NETFILTER_CFG table=filter:133 family=2 entries=8 op=nft_register_rule pid=5667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:42.905000 audit[5667]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffedd7a53a0 a2=0 a3=7ffedd7a538c items=0 ppid=2276 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:42.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:42.911000 audit[5667]: NETFILTER_CFG table=nat:134 family=2 entries=38 op=nft_register_chain pid=5667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:42.911000 audit[5667]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffedd7a53a0 a2=0 a3=7ffedd7a538c items=0 ppid=2276 pid=5667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:42.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:47.218475 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:51858.service. Sep 10 00:42:47.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.41:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:47.220024 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 10 00:42:47.220186 kernel: audit: type=1130 audit(1757464967.217:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.41:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:47.252000 audit[5668]: USER_ACCT pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.254178 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:47.255880 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:47.254000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.259762 systemd-logind[1294]: New session 19 of user core. Sep 10 00:42:47.260526 systemd[1]: Started session-19.scope. Sep 10 00:42:47.261997 kernel: audit: type=1101 audit(1757464967.252:504): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.262055 kernel: audit: type=1103 audit(1757464967.254:505): pid=5668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.262077 kernel: audit: type=1006 audit(1757464967.254:506): pid=5668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 10 00:42:47.264652 kernel: audit: type=1300 audit(1757464967.254:506): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9ed0ac70 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:47.254000 audit[5668]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9ed0ac70 a2=3 a3=0 items=0 ppid=1 pid=5668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:47.254000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:47.270823 kernel: audit: type=1327 audit(1757464967.254:506): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:47.270879 kernel: audit: type=1105 audit(1757464967.263:507): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.263000 audit[5668]: USER_START pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.265000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.280483 kernel: audit: type=1103 audit(1757464967.265:508): pid=5671 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.397236 sshd[5668]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:47.397000 audit[5668]: USER_END pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.399924 systemd[1]: Started sshd@19-10.0.0.41:22-10.0.0.1:51866.service. Sep 10 00:42:47.404362 kernel: audit: type=1106 audit(1757464967.397:509): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.397000 audit[5668]: CRED_DISP pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.404941 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:51858.service: Deactivated successfully. Sep 10 00:42:47.405832 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:42:47.406881 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:42:47.407812 systemd-logind[1294]: Removed session 19. Sep 10 00:42:47.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.41:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:47.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.41:22-10.0.0.1:51858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:47.409383 kernel: audit: type=1104 audit(1757464967.397:510): pid=5668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.434000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.436387 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:47.435000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.436000 audit[5680]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe2019520 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:47.436000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:47.437738 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:47.442677 systemd-logind[1294]: New session 20 of user core. Sep 10 00:42:47.443884 systemd[1]: Started session-20.scope. Sep 10 00:42:47.448000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.450000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.779161 sshd[5680]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:47.780000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.780000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.783052 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:51868.service. Sep 10 00:42:47.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.41:22-10.0.0.1:51868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:47.784169 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:51866.service: Deactivated successfully. 
Sep 10 00:42:47.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.41:22-10.0.0.1:51866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:47.786087 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:42:47.786149 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:42:47.787764 systemd-logind[1294]: Removed session 20. Sep 10 00:42:47.822000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.824043 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:47.824000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.824000 audit[5693]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff89392b30 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:47.824000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:47.825889 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:47.831561 systemd-logind[1294]: New session 21 of user core. Sep 10 00:42:47.832618 systemd[1]: Started session-21.scope. 
Sep 10 00:42:47.840000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.842000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:47.905884 kubelet[2167]: E0910 00:42:47.905824 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:49.870000 audit[5711]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:49.870000 audit[5711]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc0289a8f0 a2=0 a3=7ffc0289a8dc items=0 ppid=2276 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:49.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:49.877000 audit[5711]: NETFILTER_CFG table=nat:136 family=2 entries=26 op=nft_register_rule pid=5711 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:49.877000 audit[5711]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc0289a8f0 a2=0 a3=0 items=0 ppid=2276 pid=5711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:49.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:49.900000 audit[5713]: NETFILTER_CFG table=filter:137 family=2 entries=32 op=nft_register_rule pid=5713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:49.900000 audit[5713]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffeec203c60 a2=0 a3=7ffeec203c4c items=0 ppid=2276 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:49.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:49.907000 audit[5713]: NETFILTER_CFG table=nat:138 family=2 entries=26 op=nft_register_rule pid=5713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:49.907000 audit[5713]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffeec203c60 a2=0 a3=0 items=0 ppid=2276 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:49.907000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:50.112965 sshd[5693]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:50.116320 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:36624.service. Sep 10 00:42:50.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:36624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:50.118000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.118000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.41:22-10.0.0.1:51868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:50.120913 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:51868.service: Deactivated successfully. Sep 10 00:42:50.122556 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:42:50.123105 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:42:50.125411 systemd-logind[1294]: Removed session 21. Sep 10 00:42:50.164000 audit[5714]: USER_ACCT pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.165434 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 36624 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:50.167172 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:50.166000 audit[5714]: CRED_ACQ pid=5714 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.166000 audit[5714]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3d8a35c0 a2=3 a3=0 items=0 ppid=1 pid=5714 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:50.166000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:50.174888 systemd[1]: Started session-22.scope. Sep 10 00:42:50.175408 systemd-logind[1294]: New session 22 of user core. 
Sep 10 00:42:50.183000 audit[5714]: USER_START pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.186000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.906029 sshd[5714]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:50.914273 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:36626.service. Sep 10 00:42:50.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.41:22-10.0.0.1:36626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:50.915000 audit[5714]: USER_END pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.915000 audit[5714]: CRED_DISP pid=5714 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.41:22-10.0.0.1:36624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:50.917709 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:36624.service: Deactivated successfully. Sep 10 00:42:50.919985 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:42:50.920916 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:42:50.922701 systemd-logind[1294]: Removed session 22. Sep 10 00:42:50.962000 audit[5726]: USER_ACCT pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.965093 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 36626 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:50.967000 audit[5726]: CRED_ACQ pid=5726 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:50.967000 audit[5726]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9129b060 a2=3 a3=0 items=0 ppid=1 pid=5726 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:50.967000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:50.968586 sshd[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:50.978774 systemd-logind[1294]: New session 23 of user core. 
Sep 10 00:42:50.982651 systemd[1]: Started session-23.scope. Sep 10 00:42:51.003000 audit[5726]: USER_START pid=5726 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:51.005000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:51.154455 sshd[5726]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:51.155000 audit[5726]: USER_END pid=5726 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:51.156000 audit[5726]: CRED_DISP pid=5726 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:51.158584 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:36626.service: Deactivated successfully. Sep 10 00:42:51.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.41:22-10.0.0.1:36626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:51.159987 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:42:51.160029 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:42:51.161211 systemd-logind[1294]: Removed session 23. Sep 10 00:42:51.647467 systemd[1]: run-containerd-runc-k8s.io-0e1f078181a045a629d0ed03c2ee0b97da1b62512104f784788bdb064150c3b9-runc.BnZuuV.mount: Deactivated successfully. Sep 10 00:42:56.158441 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:36634.service. Sep 10 00:42:56.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:56.160380 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 10 00:42:56.160463 kernel: audit: type=1130 audit(1757464976.158:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:42:56.202000 audit[5787]: USER_ACCT pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.202790 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 36634 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:42:56.205482 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:42:56.204000 audit[5787]: CRED_ACQ pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.210201 systemd-logind[1294]: New session 24 of user core. Sep 10 00:42:56.211281 systemd[1]: Started session-24.scope. Sep 10 00:42:56.212283 kernel: audit: type=1101 audit(1757464976.202:553): pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.212422 kernel: audit: type=1103 audit(1757464976.204:554): pid=5787 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.204000 audit[5787]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcca4f48b0 a2=3 a3=0 items=0 ppid=1 pid=5787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:56.220555 kernel: audit: type=1006 audit(1757464976.204:555): pid=5787 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 10 00:42:56.220700 kernel: audit: type=1300 audit(1757464976.204:555): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcca4f48b0 a2=3 a3=0 items=0 ppid=1 pid=5787 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:56.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:56.222708 kernel: audit: type=1327 audit(1757464976.204:555): proctitle=737368643A20636F7265205B707269765D Sep 10 00:42:56.222794 kernel: audit: type=1105 audit(1757464976.221:556): pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.221000 audit[5787]: USER_START pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.223000 audit[5790]: CRED_ACQ pid=5790 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.232530 kernel: audit: type=1103 audit(1757464976.223:557): pid=5790 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.490081 sshd[5787]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:56.491000 audit[5787]: USER_END pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.493316 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:36634.service: Deactivated successfully. Sep 10 00:42:56.494256 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:42:56.491000 audit[5787]: CRED_DISP pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.499267 systemd-logind[1294]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:42:56.501595 kernel: audit: type=1106 audit(1757464976.491:558): pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.501645 kernel: audit: type=1104 audit(1757464976.491:559): pid=5787 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:42:56.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.41:22-10.0.0.1:36634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:42:56.500883 systemd-logind[1294]: Removed session 24. 
Sep 10 00:42:57.836000 audit[5802]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=5802 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:57.836000 audit[5802]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe00596030 a2=0 a3=7ffe0059601c items=0 ppid=2276 pid=5802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:57.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:57.850000 audit[5802]: NETFILTER_CFG table=nat:140 family=2 entries=110 op=nft_register_chain pid=5802 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 10 00:42:57.850000 audit[5802]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffe00596030 a2=0 a3=7ffe0059601c items=0 ppid=2276 pid=5802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:42:57.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 10 00:42:58.901082 kubelet[2167]: E0910 00:42:58.900997 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:43:01.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:43024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:01.498521 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 10 00:43:01.501241 kernel: audit: type=1130 audit(1757464981.496:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:43024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:01.496892 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:43024.service. 
Sep 10 00:43:01.569000 audit[5826]: USER_ACCT pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.575193 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:43:01.582051 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:43:01.584169 kernel: audit: type=1101 audit(1757464981.569:564): pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.584255 kernel: audit: type=1103 audit(1757464981.581:565): pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.581000 audit[5826]: CRED_ACQ pid=5826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.593148 kernel: audit: type=1006 audit(1757464981.581:566): pid=5826 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 10 00:43:01.597891 systemd-logind[1294]: New session 25 of user core. Sep 10 00:43:01.581000 audit[5826]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd10d5bc80 a2=3 a3=0 items=0 ppid=1 pid=5826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:01.599878 systemd[1]: Started session-25.scope. 
Sep 10 00:43:01.606161 kernel: audit: type=1300 audit(1757464981.581:566): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd10d5bc80 a2=3 a3=0 items=0 ppid=1 pid=5826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:01.606348 kernel: audit: type=1327 audit(1757464981.581:566): proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:01.581000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:01.613000 audit[5826]: USER_START pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.616000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.624252 kernel: audit: type=1105 audit(1757464981.613:567): pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.624636 kernel: audit: type=1103 audit(1757464981.616:568): pid=5829 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.864115 sshd[5826]: pam_unix(sshd:session): session closed for user core Sep 10 00:43:01.864000 audit[5826]: USER_END pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.868753 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:43024.service: Deactivated successfully. Sep 10 00:43:01.864000 audit[5826]: CRED_DISP pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.870512 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:43:01.871932 systemd-logind[1294]: Session 25 logged out. Waiting for processes to exit. 
Sep 10 00:43:01.878504 kernel: audit: type=1106 audit(1757464981.864:569): pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.878601 kernel: audit: type=1104 audit(1757464981.864:570): pid=5826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:01.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.41:22-10.0.0.1:43024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:01.876503 systemd-logind[1294]: Removed session 25. Sep 10 00:43:06.864765 systemd[1]: Started sshd@25-10.0.0.41:22-10.0.0.1:43040.service. Sep 10 00:43:06.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.41:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:06.865876 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 10 00:43:06.866048 kernel: audit: type=1130 audit(1757464986.864:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.41:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:06.913000 audit[5840]: USER_ACCT pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.913733 sshd[5840]: Accepted publickey for core from 10.0.0.1 port 43040 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:43:06.915032 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:43:06.914000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.923233 kernel: audit: type=1101 audit(1757464986.913:573): pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.923379 kernel: audit: type=1103 audit(1757464986.914:574): pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.923423 kernel: audit: type=1006 audit(1757464986.914:575): pid=5840 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 10 00:43:06.923698 systemd[1]: Started session-26.scope. Sep 10 00:43:06.924147 systemd-logind[1294]: New session 26 of user core. 
Sep 10 00:43:06.926057 kernel: audit: type=1300 audit(1757464986.914:575): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6fbdeec0 a2=3 a3=0 items=0 ppid=1 pid=5840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:06.914000 audit[5840]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6fbdeec0 a2=3 a3=0 items=0 ppid=1 pid=5840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:06.932293 kernel: audit: type=1327 audit(1757464986.914:575): proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:06.914000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:06.931000 audit[5840]: USER_START pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.938368 kernel: audit: type=1105 audit(1757464986.931:576): pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.938583 kernel: audit: type=1103 audit(1757464986.933:577): pid=5843 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:06.933000 audit[5843]: CRED_ACQ pid=5843 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:07.235377 sshd[5840]: pam_unix(sshd:session): session closed for user core Sep 10 00:43:07.236000 audit[5840]: USER_END pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:07.237873 systemd[1]: sshd@25-10.0.0.41:22-10.0.0.1:43040.service: Deactivated successfully. Sep 10 00:43:07.238939 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 00:43:07.240366 systemd-logind[1294]: Session 26 logged out. Waiting for processes to exit. Sep 10 00:43:07.241407 systemd-logind[1294]: Removed session 26. 
Sep 10 00:43:07.246364 kernel: audit: type=1106 audit(1757464987.236:578): pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:07.246450 kernel: audit: type=1104 audit(1757464987.236:579): pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:07.236000 audit[5840]: CRED_DISP pid=5840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:07.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.41:22-10.0.0.1:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:07.901273 kubelet[2167]: E0910 00:43:07.901204 2167 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:43:12.238876 systemd[1]: Started sshd@26-10.0.0.41:22-10.0.0.1:45990.service. Sep 10 00:43:12.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.41:22-10.0.0.1:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:12.239986 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 10 00:43:12.240042 kernel: audit: type=1130 audit(1757464992.237:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.41:22-10.0.0.1:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:43:12.282589 kernel: audit: type=1101 audit(1757464992.275:582): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.275000 audit[5877]: USER_ACCT pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.279383 sshd[5877]: Accepted publickey for core from 10.0.0.1 port 45990 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:43:12.283118 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:43:12.277000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.288381 systemd-logind[1294]: New session 27 of user core. Sep 10 00:43:12.289066 systemd[1]: Started session-27.scope.
Sep 10 00:43:12.289857 kernel: audit: type=1103 audit(1757464992.277:583): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.289928 kernel: audit: type=1006 audit(1757464992.277:584): pid=5877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Sep 10 00:43:12.289966 kernel: audit: type=1300 audit(1757464992.277:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe5363150 a2=3 a3=0 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:12.277000 audit[5877]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe5363150 a2=3 a3=0 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:43:12.277000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:12.295927 kernel: audit: type=1327 audit(1757464992.277:584): proctitle=737368643A20636F7265205B707269765D Sep 10 00:43:12.296661 kernel: audit: type=1105 audit(1757464992.294:585): pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.294000 audit[5877]: USER_START pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.297000 audit[5880]: CRED_ACQ pid=5880 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.329521 kernel: audit: type=1103 audit(1757464992.297:586): pid=5880 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.433014 sshd[5877]: pam_unix(sshd:session): session closed for user core Sep 10 00:43:12.433000 audit[5877]: USER_END pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.436507 systemd[1]: sshd@26-10.0.0.41:22-10.0.0.1:45990.service: Deactivated successfully. Sep 10 00:43:12.437961 systemd-logind[1294]: Session 27 logged out. Waiting for processes to exit. Sep 10 00:43:12.437996 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 00:43:12.439550 systemd-logind[1294]: Removed session 27. 
Sep 10 00:43:12.433000 audit[5877]: CRED_DISP pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.449378 kernel: audit: type=1106 audit(1757464992.433:587): pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.449565 kernel: audit: type=1104 audit(1757464992.433:588): pid=5877 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Sep 10 00:43:12.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.41:22-10.0.0.1:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'