Sep 6 00:20:13.890855 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:20:13.890883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:13.890896 kernel: BIOS-provided physical RAM map: Sep 6 00:20:13.890903 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 6 00:20:13.890909 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 6 00:20:13.890915 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 6 00:20:13.890923 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Sep 6 00:20:13.890930 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Sep 6 00:20:13.890939 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 6 00:20:13.890945 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 6 00:20:13.890952 kernel: NX (Execute Disable) protection: active Sep 6 00:20:13.890958 kernel: SMBIOS 2.8 present. Sep 6 00:20:13.890965 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Sep 6 00:20:13.890978 kernel: Hypervisor detected: KVM Sep 6 00:20:13.890990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:20:13.891004 kernel: kvm-clock: cpu 0, msr 7219f001, primary cpu clock Sep 6 00:20:13.891014 kernel: kvm-clock: using sched offset of 3505690811 cycles Sep 6 00:20:13.891026 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:20:13.891037 kernel: tsc: Detected 2494.140 MHz processor Sep 6 00:20:13.891045 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:20:13.891053 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:20:13.891060 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Sep 6 00:20:13.891068 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:20:13.891078 kernel: ACPI: Early table checksum verification disabled Sep 6 00:20:13.891086 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Sep 6 00:20:13.891093 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891100 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891107 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891115 kernel: ACPI: FACS 0x000000007FFE0000 000040 Sep 6 00:20:13.891122 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891129 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891139 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891154 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:13.891164 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Sep 6 00:20:13.891174 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Sep 
6 00:20:13.891187 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Sep 6 00:20:13.891196 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Sep 6 00:20:13.891203 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Sep 6 00:20:13.891210 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Sep 6 00:20:13.891218 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Sep 6 00:20:13.891233 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 6 00:20:13.891240 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 6 00:20:13.891248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 6 00:20:13.891256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 6 00:20:13.891264 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Sep 6 00:20:13.891272 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Sep 6 00:20:13.891283 kernel: Zone ranges: Sep 6 00:20:13.891291 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:20:13.891299 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Sep 6 00:20:13.891307 kernel: Normal empty Sep 6 00:20:13.891315 kernel: Movable zone start for each node Sep 6 00:20:13.891322 kernel: Early memory node ranges Sep 6 00:20:13.891330 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 6 00:20:13.891338 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Sep 6 00:20:13.891346 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Sep 6 00:20:13.891356 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:20:13.891368 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 6 00:20:13.891375 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Sep 6 00:20:13.891383 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 6 00:20:13.891391 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:20:13.891399 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 6 00:20:13.891406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 00:20:13.891414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:20:13.891422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:20:13.891433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:20:13.891443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:20:13.891451 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:20:13.891458 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:20:13.891466 kernel: TSC deadline timer available Sep 6 00:20:13.891474 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 6 00:20:13.891482 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Sep 6 00:20:13.891489 kernel: Booting paravirtualized kernel on KVM Sep 6 00:20:13.891497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:20:13.891509 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 6 00:20:13.891517 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 6 00:20:13.891524 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 6 00:20:13.891532 kernel: pcpu-alloc: [0] 0 1 Sep 6 00:20:13.891540 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Sep 6 00:20:13.891548 kernel: kvm-guest: 
PV spinlocks disabled, no host support Sep 6 00:20:13.891556 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Sep 6 00:20:13.891563 kernel: Policy zone: DMA32 Sep 6 00:20:13.891572 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:13.891583 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:20:13.891591 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:20:13.891599 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 6 00:20:13.891607 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:20:13.891615 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved) Sep 6 00:20:13.891622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:20:13.891630 kernel: Kernel/User page tables isolation: enabled Sep 6 00:20:13.891638 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:20:13.891648 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:20:13.891656 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:20:13.891665 kernel: rcu: RCU event tracing is enabled. Sep 6 00:20:13.891673 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:20:13.891681 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:20:13.891689 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:20:13.891697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 6 00:20:13.891704 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:20:13.891725 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 6 00:20:13.891737 kernel: random: crng init done Sep 6 00:20:13.891745 kernel: Console: colour VGA+ 80x25 Sep 6 00:20:13.891753 kernel: printk: console [tty0] enabled Sep 6 00:20:13.891761 kernel: printk: console [ttyS0] enabled Sep 6 00:20:13.891769 kernel: ACPI: Core revision 20210730 Sep 6 00:20:13.891777 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 6 00:20:13.891785 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:20:13.891792 kernel: x2apic enabled Sep 6 00:20:13.891800 kernel: Switched APIC routing to physical x2apic. Sep 6 00:20:13.891808 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 6 00:20:13.891820 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Sep 6 00:20:13.891828 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Sep 6 00:20:13.891840 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 6 00:20:13.891848 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 6 00:20:13.891856 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:20:13.891864 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:20:13.891872 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:20:13.891880 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 6 00:20:13.891893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 00:20:13.891919 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 00:20:13.891928 kernel: MDS: Mitigation: Clear CPU buffers Sep 6 00:20:13.891939 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:20:13.891947 kernel: active return thunk: its_return_thunk Sep 6 00:20:13.891955 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 00:20:13.891964 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:20:13.891972 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:20:13.891980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:20:13.891989 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:20:13.892001 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 00:20:13.892009 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:20:13.892018 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:20:13.892026 kernel: LSM: Security Framework initializing Sep 6 00:20:13.892034 kernel: SELinux: Initializing. Sep 6 00:20:13.892043 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:20:13.892052 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:20:13.892062 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Sep 6 00:20:13.892071 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Sep 6 00:20:13.892079 kernel: signal: max sigframe size: 1776 Sep 6 00:20:13.892087 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:20:13.892095 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 6 00:20:13.892103 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:20:13.892112 kernel: x86: Booting SMP configuration: Sep 6 00:20:13.892120 kernel: .... 
node #0, CPUs: #1 Sep 6 00:20:13.892128 kernel: kvm-clock: cpu 1, msr 7219f041, secondary cpu clock Sep 6 00:20:13.892139 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Sep 6 00:20:13.892148 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:20:13.892156 kernel: smpboot: Max logical packages: 1 Sep 6 00:20:13.892164 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Sep 6 00:20:13.892173 kernel: devtmpfs: initialized Sep 6 00:20:13.892181 kernel: x86/mm: Memory block size: 128MB Sep 6 00:20:13.892189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:20:13.892198 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:20:13.892206 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:20:13.892217 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:20:13.892230 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:20:13.892241 kernel: audit: type=2000 audit(1757118013.084:1): state=initialized audit_enabled=0 res=1 Sep 6 00:20:13.892252 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:20:13.892266 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:20:13.892277 kernel: cpuidle: using governor menu Sep 6 00:20:13.892293 kernel: ACPI: bus type PCI registered Sep 6 00:20:13.892307 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:20:13.892316 kernel: dca service started, version 1.12.1 Sep 6 00:20:13.892328 kernel: PCI: Using configuration type 1 for base access Sep 6 00:20:13.892337 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 6 00:20:13.892345 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:20:13.892353 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:20:13.892362 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:20:13.892370 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:20:13.892390 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:20:13.892403 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:20:13.892411 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:20:13.892423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:20:13.892432 kernel: ACPI: Interpreter enabled Sep 6 00:20:13.892440 kernel: ACPI: PM: (supports S0 S5) Sep 6 00:20:13.892449 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:20:13.892457 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:20:13.892466 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 6 00:20:13.892474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:20:13.892672 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:20:13.892783 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Sep 6 00:20:13.892796 kernel: acpiphp: Slot [3] registered Sep 6 00:20:13.892804 kernel: acpiphp: Slot [4] registered Sep 6 00:20:13.892813 kernel: acpiphp: Slot [5] registered Sep 6 00:20:13.892821 kernel: acpiphp: Slot [6] registered Sep 6 00:20:13.892829 kernel: acpiphp: Slot [7] registered Sep 6 00:20:13.892837 kernel: acpiphp: Slot [8] registered Sep 6 00:20:13.892846 kernel: acpiphp: Slot [9] registered Sep 6 00:20:13.892854 kernel: acpiphp: Slot [10] registered Sep 6 00:20:13.892867 kernel: acpiphp: Slot [11] registered Sep 6 00:20:13.892875 kernel: acpiphp: Slot [12] registered Sep 6 00:20:13.892883 kernel: acpiphp: Slot [13] registered Sep 6 00:20:13.892892 kernel: acpiphp: Slot [14] registered Sep 6 00:20:13.892900 kernel: acpiphp: Slot [15] registered Sep 6 00:20:13.892909 kernel: acpiphp: Slot [16] registered Sep 6 00:20:13.892917 kernel: acpiphp: Slot [17] registered Sep 6 00:20:13.892926 kernel: acpiphp: Slot [18] registered Sep 6 00:20:13.892934 kernel: acpiphp: Slot [19] registered Sep 6 00:20:13.892945 kernel: acpiphp: Slot [20] registered Sep 6 00:20:13.892954 kernel: acpiphp: Slot [21] registered Sep 6 00:20:13.892963 kernel: acpiphp: Slot [22] registered Sep 6 00:20:13.892971 kernel: acpiphp: Slot [23] registered Sep 6 00:20:13.892979 kernel: acpiphp: Slot [24] registered Sep 6 00:20:13.892988 kernel: acpiphp: Slot [25] registered Sep 6 00:20:13.892996 kernel: acpiphp: Slot [26] registered Sep 6 00:20:13.893004 kernel: acpiphp: Slot [27] registered Sep 6 00:20:13.893013 kernel: acpiphp: Slot [28] registered Sep 6 00:20:13.893021 kernel: acpiphp: Slot [29] registered Sep 6 00:20:13.893033 kernel: acpiphp: Slot [30] registered Sep 6 00:20:13.893041 kernel: acpiphp: Slot [31] registered Sep 6 00:20:13.893050 kernel: PCI host bridge to bus 0000:00 Sep 6 00:20:13.893144 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:20:13.893227 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:20:13.893307 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:20:13.893388 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 6 00:20:13.893471 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Sep 6 00:20:13.893549 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:20:13.893675 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 6 00:20:13.897919 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 6 00:20:13.898048 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 6 00:20:13.898141 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Sep 6 00:20:13.898237 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 6 00:20:13.898324 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 6 00:20:13.898411 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 6 00:20:13.898498 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 6 00:20:13.898598 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Sep 6 00:20:13.898686 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Sep 6 00:20:13.898808 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 6 00:20:13.898900 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Sep 6 00:20:13.898988 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Sep 6 00:20:13.899087 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 
0x030000 Sep 6 00:20:13.899177 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Sep 6 00:20:13.899269 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Sep 6 00:20:13.899361 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Sep 6 00:20:13.899452 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Sep 6 00:20:13.899545 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:20:13.899646 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Sep 6 00:20:13.899812 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Sep 6 00:20:13.900008 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Sep 6 00:20:13.900133 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Sep 6 00:20:13.903966 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 6 00:20:13.904090 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Sep 6 00:20:13.904181 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Sep 6 00:20:13.904269 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Sep 6 00:20:13.904369 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Sep 6 00:20:13.904480 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Sep 6 00:20:13.904568 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Sep 6 00:20:13.904654 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Sep 6 00:20:13.904766 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Sep 6 00:20:13.904895 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Sep 6 00:20:13.908912 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Sep 6 00:20:13.909032 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Sep 6 00:20:13.909141 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Sep 6 00:20:13.909234 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Sep 6 00:20:13.909323 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Sep 6 00:20:13.909416 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Sep 6 00:20:13.909524 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Sep 6 00:20:13.909613 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Sep 6 00:20:13.909700 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Sep 6 00:20:13.909720 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:20:13.909729 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:20:13.909738 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:20:13.909749 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:20:13.909758 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 6 00:20:13.909767 kernel: iommu: Default domain type: Translated Sep 6 00:20:13.909776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:20:13.909865 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 6 00:20:13.909952 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:20:13.910040 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 6 00:20:13.910051 kernel: vgaarb: loaded Sep 6 00:20:13.910060 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:20:13.910072 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:20:13.910081 kernel: PTP clock support registered Sep 6 00:20:13.910089 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:20:13.910098 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:20:13.910107 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 6 00:20:13.910116 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Sep 6 00:20:13.910124 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 6 00:20:13.910133 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 6 00:20:13.910141 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:20:13.910152 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:20:13.910161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:20:13.910170 kernel: pnp: PnP ACPI init Sep 6 00:20:13.910181 kernel: pnp: PnP ACPI: found 4 devices Sep 6 00:20:13.910193 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:20:13.910206 kernel: NET: Registered PF_INET protocol family Sep 6 00:20:13.910215 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:20:13.910224 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 6 00:20:13.910235 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:20:13.910244 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 00:20:13.910252 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 00:20:13.910261 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 6 00:20:13.910269 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 00:20:13.910278 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 00:20:13.910286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:20:13.910295 kernel: NET: Registered PF_XDP protocol family Sep 6 00:20:13.910389 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:20:13.910476 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:20:13.910573 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:20:13.910653 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 6 00:20:13.910751 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Sep 6 00:20:13.910846 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 6 00:20:13.910939 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 6 00:20:13.911029 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Sep 6 00:20:13.911041 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 6 00:20:13.911136 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 32025 usecs Sep 6 00:20:13.911147 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:20:13.911157 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 6 00:20:13.911166 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Sep 6 00:20:13.911175 kernel: Initialise system trusted keyrings Sep 6 00:20:13.911183 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 6 00:20:13.911192 kernel: Key type asymmetric registered Sep 6 00:20:13.911201 kernel: Asymmetric key parser 'x509' registered Sep 6 00:20:13.911209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 249) Sep 6 00:20:13.911221 kernel: io scheduler mq-deadline registered Sep 6 00:20:13.911229 kernel: io scheduler kyber registered Sep 6 00:20:13.911237 kernel: io scheduler bfq registered Sep 6 00:20:13.911245 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:20:13.911254 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Sep 6 00:20:13.911263 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 6 00:20:13.911271 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 6 00:20:13.911280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:20:13.911288 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:20:13.911300 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:20:13.911309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:20:13.911317 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:20:13.911326 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:20:13.911477 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 6 00:20:13.911564 kernel: rtc_cmos 00:03: registered as rtc0 Sep 6 00:20:13.911657 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T00:20:13 UTC (1757118013) Sep 6 00:20:13.911750 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 6 00:20:13.911765 kernel: intel_pstate: CPU model not supported Sep 6 00:20:13.911773 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:20:13.911782 kernel: Segment Routing with IPv6 Sep 6 00:20:13.911790 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:20:13.911799 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:20:13.911807 kernel: Key type dns_resolver registered Sep 6 00:20:13.911816 kernel: IPI shorthand broadcast: enabled Sep 6 00:20:13.911825 kernel: sched_clock: Marking stable (619307500, 88145176)->(814777642, -107324966) Sep 6 00:20:13.911833 kernel: registered taskstats version 1 Sep 6 00:20:13.911844 kernel: Loading compiled-in X.509 certificates Sep 6 00:20:13.911853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:20:13.911861 kernel: Key type .fscrypt registered Sep 6 00:20:13.911870 kernel: Key type fscrypt-provisioning registered Sep 6 00:20:13.911878 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 6 00:20:13.911887 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:20:13.911895 kernel: ima: No architecture policies found Sep 6 00:20:13.911903 kernel: clk: Disabling unused clocks Sep 6 00:20:13.911914 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:20:13.911923 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:20:13.911931 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:20:13.911940 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:20:13.911948 kernel: Run /init as init process Sep 6 00:20:13.911957 kernel: with arguments: Sep 6 00:20:13.911988 kernel: /init Sep 6 00:20:13.912000 kernel: with environment: Sep 6 00:20:13.912008 kernel: HOME=/ Sep 6 00:20:13.912017 kernel: TERM=linux Sep 6 00:20:13.912029 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:20:13.912041 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:13.912053 systemd[1]: Detected virtualization kvm. Sep 6 00:20:13.912063 systemd[1]: Detected architecture x86-64. Sep 6 00:20:13.912072 systemd[1]: Running in initrd. Sep 6 00:20:13.912081 systemd[1]: No hostname configured, using default hostname. Sep 6 00:20:13.912090 systemd[1]: Hostname set to <localhost>. Sep 6 00:20:13.912102 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:20:13.912112 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:20:13.912121 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:13.912130 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:13.912139 systemd[1]: Reached target paths.target. Sep 6 00:20:13.912148 systemd[1]: Reached target slices.target. Sep 6 00:20:13.912157 systemd[1]: Reached target swap.target. Sep 6 00:20:13.912166 systemd[1]: Reached target timers.target. Sep 6 00:20:13.912178 systemd[1]: Listening on iscsid.socket. Sep 6 00:20:13.912188 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:20:13.912197 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:20:13.912206 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:20:13.912215 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:20:13.912225 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:13.912234 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:13.912243 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:13.912257 systemd[1]: Reached target sockets.target. Sep 6 00:20:13.912267 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:13.912279 systemd[1]: Finished network-cleanup.service. Sep 6 00:20:13.912289 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:20:13.912298 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:13.912307 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:20:13.912319 systemd[1]: Starting systemd-resolved.service... Sep 6 00:20:13.912328 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:20:13.912337 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:20:13.912346 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:20:13.912355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:20:13.912369 systemd-journald[185]: Journal started Sep 6 00:20:13.912485 systemd-journald[185]: Runtime Journal (/run/log/journal/ee51b3a35b774876a7d1d83dd1afc902) is 4.9M, max 39.5M, 34.5M free. Sep 6 00:20:13.908914 systemd-modules-load[186]: Inserted module 'overlay' Sep 6 00:20:13.917050 systemd-resolved[187]: Positive Trust Anchors: Sep 6 00:20:13.917062 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:13.936096 systemd[1]: Started systemd-journald.service. Sep 6 00:20:13.917095 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:13.944362 kernel: audit: type=1130 audit(1757118013.935:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.919818 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 6 00:20:13.936509 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:13.952639 kernel: audit: type=1130 audit(1757118013.940:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.952668 kernel: audit: type=1130 audit(1757118013.941:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.952681 kernel: audit: type=1130 audit(1757118013.941:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.941877 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:20:13.942312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:20:13.942701 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:13.943785 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:20:13.957077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 6 00:20:13.962923 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 6 00:20:13.963761 kernel: Bridge firewalling registered Sep 6 00:20:13.984439 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:20:13.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.987799 kernel: audit: type=1130 audit(1757118013.984:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:13.986054 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:20:13.990397 kernel: SCSI subsystem initialized Sep 6 00:20:13.999790 dracut-cmdline[202]: dracut-dracut-053 Sep 6 00:20:14.003205 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:14.009156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:20:14.009227 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:20:14.009243 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:20:14.014516 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 6 00:20:14.015488 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:14.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.026773 kernel: audit: type=1130 audit(1757118014.021:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.026393 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:14.035541 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:14.039162 kernel: audit: type=1130 audit(1757118014.035:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.094752 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:20:14.113743 kernel: iscsi: registered transport (tcp) Sep 6 00:20:14.138754 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:20:14.138837 kernel: QLogic iSCSI HBA Driver Sep 6 00:20:14.185757 kernel: audit: type=1130 audit(1757118014.182:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:14.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.182776 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:20:14.184321 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:20:14.239792 kernel: raid6: avx2x4 gen() 17317 MB/s Sep 6 00:20:14.256769 kernel: raid6: avx2x4 xor() 9027 MB/s Sep 6 00:20:14.273777 kernel: raid6: avx2x2 gen() 17515 MB/s Sep 6 00:20:14.290781 kernel: raid6: avx2x2 xor() 20498 MB/s Sep 6 00:20:14.307770 kernel: raid6: avx2x1 gen() 13267 MB/s Sep 6 00:20:14.324779 kernel: raid6: avx2x1 xor() 17078 MB/s Sep 6 00:20:14.341772 kernel: raid6: sse2x4 gen() 12493 MB/s Sep 6 00:20:14.358775 kernel: raid6: sse2x4 xor() 6753 MB/s Sep 6 00:20:14.376493 kernel: raid6: sse2x2 gen() 12168 MB/s Sep 6 00:20:14.392797 kernel: raid6: sse2x2 xor() 8454 MB/s Sep 6 00:20:14.409794 kernel: raid6: sse2x1 gen() 11895 MB/s Sep 6 00:20:14.427012 kernel: raid6: sse2x1 xor() 5906 MB/s Sep 6 00:20:14.427119 kernel: raid6: using algorithm avx2x2 gen() 17515 MB/s Sep 6 00:20:14.427132 kernel: raid6: .... xor() 20498 MB/s, rmw enabled Sep 6 00:20:14.428219 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:20:14.442767 kernel: xor: automatically using best checksumming function avx Sep 6 00:20:14.549784 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:20:14.561738 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:20:14.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.563190 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:14.561000 audit: BPF prog-id=7 op=LOAD Sep 6 00:20:14.561000 audit: BPF prog-id=8 op=LOAD Sep 6 00:20:14.566745 kernel: audit: type=1130 audit(1757118014.561:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.581149 systemd-udevd[385]: Using default interface naming scheme 'v252'. Sep 6 00:20:14.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.587683 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:14.589555 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:20:14.610812 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Sep 6 00:20:14.647052 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:20:14.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.648354 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:14.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:14.697034 systemd[1]: Finished systemd-udev-trigger.service. 
Sep 6 00:20:14.759996 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 6 00:20:14.810304 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:20:14.810332 kernel: scsi host0: Virtio SCSI HBA Sep 6 00:20:14.810477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:20:14.810491 kernel: GPT:9289727 != 125829119 Sep 6 00:20:14.810502 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:20:14.810513 kernel: GPT:9289727 != 125829119 Sep 6 00:20:14.810524 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:20:14.810535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:14.812188 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 6 00:20:14.821981 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:20:14.822033 kernel: AES CTR mode by8 optimization enabled Sep 6 00:20:14.836920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:20:14.844800 kernel: libata version 3.00 loaded. Sep 6 00:20:14.841931 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:20:14.860311 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:20:14.860839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:20:14.862818 systemd[1]: Starting disk-uuid.service... Sep 6 00:20:14.870518 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (438) Sep 6 00:20:14.870651 disk-uuid[461]: Primary Header is updated. Sep 6 00:20:14.870651 disk-uuid[461]: Secondary Entries is updated. Sep 6 00:20:14.870651 disk-uuid[461]: Secondary Header is updated. Sep 6 00:20:14.873739 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 6 00:20:14.894353 kernel: ACPI: bus type USB registered Sep 6 00:20:14.894391 kernel: scsi host1: ata_piix Sep 6 00:20:14.894752 kernel: usbcore: registered new interface driver usbfs Sep 6 00:20:14.894774 kernel: usbcore: registered new interface driver hub Sep 6 00:20:14.894790 kernel: usbcore: registered new device driver usb Sep 6 00:20:14.894805 kernel: scsi host2: ata_piix Sep 6 00:20:14.894966 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 6 00:20:14.894978 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 6 00:20:14.902738 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 6 00:20:14.905764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:15.060748 kernel: ehci-pci: EHCI PCI platform driver Sep 6 00:20:15.067754 kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 6 00:20:15.085785 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 6 00:20:15.089240 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 6 00:20:15.089446 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 6 00:20:15.089629 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Sep 6 00:20:15.089823 kernel: hub 1-0:1.0: USB hub found Sep 6 00:20:15.090047 kernel: hub 1-0:1.0: 2 ports detected Sep 6 00:20:15.881599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:15.881668 disk-uuid[463]: The operation has completed successfully. Sep 6 00:20:15.928366 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:20:15.928524 systemd[1]: Finished disk-uuid.service. 
Sep 6 00:20:15.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:15.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:15.930314 systemd[1]: Starting verity-setup.service... Sep 6 00:20:15.950762 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:20:16.007401 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:20:16.009975 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:20:16.013153 systemd[1]: Finished verity-setup.service. Sep 6 00:20:16.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.121750 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:20:16.122486 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:20:16.123158 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:20:16.124263 systemd[1]: Starting ignition-setup.service... Sep 6 00:20:16.125905 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:20:16.142937 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:16.143019 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:16.143039 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:16.162868 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:20:16.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.171161 systemd[1]: Finished ignition-setup.service. Sep 6 00:20:16.172955 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:20:16.325764 ignition[605]: Ignition 2.14.0 Sep 6 00:20:16.325782 ignition[605]: Stage: fetch-offline Sep 6 00:20:16.325868 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:16.325906 ignition[605]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:16.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.330000 audit: BPF prog-id=9 op=LOAD Sep 6 00:20:16.329855 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:20:16.332329 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:16.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.334384 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:16.340429 systemd[1]: Finished ignition-fetch-offline.service. 
Sep 6 00:20:16.334552 ignition[605]: parsed url from cmdline: "" Sep 6 00:20:16.334558 ignition[605]: no config URL provided Sep 6 00:20:16.334567 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:20:16.334581 ignition[605]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:20:16.334590 ignition[605]: failed to fetch config: resource requires networking Sep 6 00:20:16.334896 ignition[605]: Ignition finished successfully Sep 6 00:20:16.369002 systemd-networkd[688]: lo: Link UP Sep 6 00:20:16.369017 systemd-networkd[688]: lo: Gained carrier Sep 6 00:20:16.370705 systemd-networkd[688]: Enumeration completed Sep 6 00:20:16.370883 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:16.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.371805 systemd[1]: Reached target network.target. Sep 6 00:20:16.373856 systemd[1]: Starting ignition-fetch.service... Sep 6 00:20:16.375554 systemd[1]: Starting iscsiuio.service... Sep 6 00:20:16.377216 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:20:16.391220 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 6 00:20:16.393373 systemd-networkd[688]: eth1: Link UP Sep 6 00:20:16.393387 systemd-networkd[688]: eth1: Gained carrier Sep 6 00:20:16.399587 ignition[690]: Ignition 2.14.0 Sep 6 00:20:16.399433 systemd[1]: Started iscsiuio.service. Sep 6 00:20:16.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.399595 ignition[690]: Stage: fetch Sep 6 00:20:16.403571 systemd[1]: Starting iscsid.service... Sep 6 00:20:16.399770 ignition[690]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:16.404271 systemd-networkd[688]: eth0: Link UP Sep 6 00:20:16.399790 ignition[690]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:16.412438 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:16.412438 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:20:16.412438 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:20:16.412438 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:20:16.412438 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:16.412438 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:20:16.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:20:16.404276 systemd-networkd[688]: eth0: Gained carrier Sep 6 00:20:16.404272 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:16.414532 systemd[1]: Started iscsid.service. Sep 6 00:20:16.404567 ignition[690]: parsed url from cmdline: "" Sep 6 00:20:16.416858 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:20:16.404588 ignition[690]: no config URL provided Sep 6 00:20:16.417898 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253 Sep 6 00:20:16.404597 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:20:16.421906 systemd-networkd[688]: eth0: DHCPv4 address 64.227.108.127/20, gateway 64.227.96.1 acquired from 169.254.169.253 Sep 6 00:20:16.404611 ignition[690]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:20:16.404656 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 6 00:20:16.412133 ignition[690]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 6 00:20:16.436668 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:20:16.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.437213 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:20:16.437858 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:16.438760 systemd[1]: Reached target remote-fs.target. Sep 6 00:20:16.440462 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:20:16.450540 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:20:16.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.612620 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Sep 6 00:20:16.626774 ignition[690]: GET result: OK Sep 6 00:20:16.626946 ignition[690]: parsing config with SHA512: 372e2bb592cbf2f058e1c0a1946654dcec65a6a862dc00dcf86fd2b821fcb545159ebc0b3e3e69473ae979be17c55f2ef8cfd8d77fb1760317045ef5675f901b Sep 6 00:20:16.635451 unknown[690]: fetched base config from "system" Sep 6 00:20:16.635463 unknown[690]: fetched base config from "system" Sep 6 00:20:16.635962 ignition[690]: fetch: fetch complete Sep 6 00:20:16.635470 unknown[690]: fetched user config from "digitalocean" Sep 6 00:20:16.635967 ignition[690]: fetch: fetch passed Sep 6 00:20:16.637389 systemd[1]: Finished ignition-fetch.service. Sep 6 00:20:16.636011 ignition[690]: Ignition finished successfully Sep 6 00:20:16.639159 systemd[1]: Starting ignition-kargs.service... Sep 6 00:20:16.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:16.653230 ignition[713]: Ignition 2.14.0 Sep 6 00:20:16.653240 ignition[713]: Stage: kargs Sep 6 00:20:16.653405 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:16.653427 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:16.655678 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:16.659371 ignition[713]: kargs: kargs passed Sep 6 00:20:16.659443 ignition[713]: Ignition finished successfully Sep 6 00:20:16.660663 systemd[1]: Finished ignition-kargs.service. Sep 6 00:20:16.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.662562 systemd[1]: Starting ignition-disks.service... Sep 6 00:20:16.671407 ignition[719]: Ignition 2.14.0 Sep 6 00:20:16.671420 ignition[719]: Stage: disks Sep 6 00:20:16.671553 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:16.671573 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:16.673480 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:16.675165 ignition[719]: disks: disks passed Sep 6 00:20:16.675229 ignition[719]: Ignition finished successfully Sep 6 00:20:16.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.676101 systemd[1]: Finished ignition-disks.service. Sep 6 00:20:16.676703 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:20:16.677137 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:16.677691 systemd[1]: Reached target local-fs.target. Sep 6 00:20:16.678269 systemd[1]: Reached target sysinit.target. Sep 6 00:20:16.678810 systemd[1]: Reached target basic.target. Sep 6 00:20:16.680791 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:20:16.696423 systemd-fsck[727]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:20:16.699911 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:20:16.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.701368 systemd[1]: Mounting sysroot.mount... Sep 6 00:20:16.716751 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:20:16.717182 systemd[1]: Mounted sysroot.mount. Sep 6 00:20:16.717642 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:20:16.719421 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:20:16.720783 systemd[1]: Starting flatcar-digitalocean-network.service... Sep 6 00:20:16.722573 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 00:20:16.722987 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:20:16.723027 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:20:16.727574 systemd[1]: Mounted sysroot-usr.mount. 
Sep 6 00:20:16.729903 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:20:16.737024 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:20:16.751142 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:20:16.763335 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:20:16.774754 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:20:16.848397 coreos-metadata[733]: Sep 06 00:20:16.848 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:20:16.852162 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:20:16.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.853950 systemd[1]: Starting ignition-mount.service... Sep 6 00:20:16.855597 systemd[1]: Starting sysroot-boot.service... Sep 6 00:20:16.863304 coreos-metadata[733]: Sep 06 00:20:16.863 INFO Fetch successful Sep 6 00:20:16.867589 bash[785]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:20:16.874555 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 6 00:20:16.874657 systemd[1]: Finished flatcar-digitalocean-network.service. Sep 6 00:20:16.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.884020 ignition[786]: INFO : Ignition 2.14.0 Sep 6 00:20:16.884735 ignition[786]: INFO : Stage: mount Sep 6 00:20:16.885266 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:16.885837 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:16.888489 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:16.890887 ignition[786]: INFO : mount: mount passed Sep 6 00:20:16.891400 ignition[786]: INFO : Ignition finished successfully Sep 6 00:20:16.891915 coreos-metadata[734]: Sep 06 00:20:16.891 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:20:16.893547 systemd[1]: Finished ignition-mount.service. Sep 6 00:20:16.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.895535 systemd[1]: Finished sysroot-boot.service. Sep 6 00:20:16.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:16.902952 coreos-metadata[734]: Sep 06 00:20:16.902 INFO Fetch successful Sep 6 00:20:16.907308 coreos-metadata[734]: Sep 06 00:20:16.907 INFO wrote hostname ci-3510.3.8-n-0d6cc4df9c to /sysroot/etc/hostname Sep 6 00:20:16.908271 systemd[1]: Finished flatcar-metadata-hostname.service. 
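The flatcar-metadata-hostname step above fetches the droplet metadata JSON from the link-local endpoint and writes the reported hostname into the mounted sysroot. Below is a minimal Python sketch of that flow; it is not the coreos-metadata implementation, and the "hostname" key is an assumed field of the DigitalOcean metadata schema.

# Illustrative sketch only -- not coreos-metadata/afterburn. It mirrors what the
# log records: fetch the droplet metadata JSON from 169.254.169.254 and write
# the reported hostname into the mounted sysroot.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"   # URL taken from the log

def fetch_hostname(url: str = METADATA_URL) -> str:
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)["hostname"]   # assumed key name

if __name__ == "__main__":
    # Target path taken from the log ("wrote hostname ... to /sysroot/etc/hostname").
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(fetch_hostname() + "\n")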
Sep 6 00:20:16.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:17.033745 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:20:17.042749 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (793) Sep 6 00:20:17.052972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:17.053045 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:17.053070 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:17.057573 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:20:17.058887 systemd[1]: Starting ignition-files.service... Sep 6 00:20:17.083580 ignition[813]: INFO : Ignition 2.14.0 Sep 6 00:20:17.083580 ignition[813]: INFO : Stage: files Sep 6 00:20:17.084680 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:17.084680 ignition[813]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:17.085838 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:17.091387 ignition[813]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:20:17.092017 ignition[813]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:20:17.092017 ignition[813]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:20:17.094907 ignition[813]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:20:17.095621 ignition[813]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:20:17.097136 unknown[813]: wrote ssh authorized keys file for user: core Sep 6 00:20:17.097755 ignition[813]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:20:17.098443 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:20:17.099350 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:20:17.099350 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:20:17.099350 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 00:20:17.158864 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:20:17.349470 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:20:17.349470 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] 
writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:17.351367 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:20:17.630062 systemd-networkd[688]: eth1: Gained IPv6LL Sep 6 00:20:17.758006 systemd-networkd[688]: eth0: Gained IPv6LL Sep 6 00:20:17.787789 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:20:18.173520 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:20:18.174592 ignition[813]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:20:18.175150 ignition[813]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:20:18.175657 ignition[813]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 6 00:20:18.176786 ignition[813]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:20:18.177775 ignition[813]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:20:18.178510 ignition[813]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 6 00:20:18.178998 ignition[813]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Sep 6 00:20:18.179505 ignition[813]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(f): [finished] 
processing unit "prepare-helm.service" Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:18.181862 ignition[813]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:18.186998 ignition[813]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:18.187840 ignition[813]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:18.188476 ignition[813]: INFO : files: files passed Sep 6 00:20:18.188933 ignition[813]: INFO : Ignition finished successfully Sep 6 00:20:18.190920 systemd[1]: Finished ignition-files.service. Sep 6 00:20:18.195874 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:20:18.195901 kernel: audit: type=1130 audit(1757118018.190:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.193136 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:20:18.196894 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:20:18.198792 systemd[1]: Starting ignition-quench.service... Sep 6 00:20:18.201884 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:20:18.202582 systemd[1]: Finished ignition-quench.service. Sep 6 00:20:18.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.207011 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:20:18.209089 kernel: audit: type=1130 audit(1757118018.203:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.209117 kernel: audit: type=1131 audit(1757118018.203:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.209411 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:20:18.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.210470 systemd[1]: Reached target ignition-complete.target. 
Sep 6 00:20:18.213761 kernel: audit: type=1130 audit(1757118018.209:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.214465 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:20:18.234419 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:20:18.235337 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:20:18.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.236887 systemd[1]: Reached target initrd-fs.target. Sep 6 00:20:18.242529 kernel: audit: type=1130 audit(1757118018.235:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.242584 kernel: audit: type=1131 audit(1757118018.235:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.243314 systemd[1]: Reached target initrd.target. Sep 6 00:20:18.244531 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:20:18.247226 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:20:18.261509 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:20:18.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.263295 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:20:18.267344 kernel: audit: type=1130 audit(1757118018.261:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.274828 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:20:18.275281 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:20:18.276095 systemd[1]: Stopped target timers.target. Sep 6 00:20:18.283292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:20:18.288132 kernel: audit: type=1131 audit(1757118018.283:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.283429 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:20:18.284283 systemd[1]: Stopped target initrd.target. Sep 6 00:20:18.288585 systemd[1]: Stopped target basic.target. Sep 6 00:20:18.290367 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:20:18.290817 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:20:18.291630 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:20:18.292603 systemd[1]: Stopped target remote-fs.target. 
Sep 6 00:20:18.293231 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:20:18.293989 systemd[1]: Stopped target sysinit.target. Sep 6 00:20:18.294630 systemd[1]: Stopped target local-fs.target. Sep 6 00:20:18.295299 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:20:18.295967 systemd[1]: Stopped target swap.target. Sep 6 00:20:18.296684 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:20:18.301176 kernel: audit: type=1131 audit(1757118018.296:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.296858 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:20:18.297599 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:20:18.305981 kernel: audit: type=1131 audit(1757118018.301:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.301633 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:20:18.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.301831 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:20:18.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.302395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:20:18.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.302532 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:20:18.306478 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:20:18.306631 systemd[1]: Stopped ignition-files.service. Sep 6 00:20:18.307191 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 00:20:18.311163 iscsid[698]: iscsid shutting down. Sep 6 00:20:18.307357 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 00:20:18.309371 systemd[1]: Stopping ignition-mount.service... Sep 6 00:20:18.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:20:18.315012 systemd[1]: Stopping iscsid.service... Sep 6 00:20:18.316993 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:20:18.317518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:20:18.317829 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:20:18.318375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:20:18.318502 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:20:18.321178 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:20:18.321335 systemd[1]: Stopped iscsid.service. Sep 6 00:20:18.322972 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:20:18.323075 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:20:18.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.334555 ignition[851]: INFO : Ignition 2.14.0 Sep 6 00:20:18.334555 ignition[851]: INFO : Stage: umount Sep 6 00:20:18.334555 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:20:18.339162 ignition[851]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:20:18.336118 systemd[1]: Stopping iscsiuio.service... Sep 6 00:20:18.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.343132 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:20:18.339644 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:20:18.339813 systemd[1]: Stopped iscsiuio.service. Sep 6 00:20:18.346117 ignition[851]: INFO : umount: umount passed Sep 6 00:20:18.346117 ignition[851]: INFO : Ignition finished successfully Sep 6 00:20:18.346743 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:20:18.349053 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:20:18.349154 systemd[1]: Stopped ignition-mount.service. Sep 6 00:20:18.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.350131 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:20:18.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.350183 systemd[1]: Stopped ignition-disks.service. Sep 6 00:20:18.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.350846 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:20:18.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:20:18.350910 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:20:18.351683 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:20:18.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.351844 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:20:18.352504 systemd[1]: Stopped target network.target. Sep 6 00:20:18.353312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:20:18.353370 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:20:18.354075 systemd[1]: Stopped target paths.target. Sep 6 00:20:18.354732 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:20:18.358813 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:20:18.359254 systemd[1]: Stopped target slices.target. Sep 6 00:20:18.359962 systemd[1]: Stopped target sockets.target. Sep 6 00:20:18.360815 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:20:18.360866 systemd[1]: Closed iscsid.socket. Sep 6 00:20:18.361484 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:20:18.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.361531 systemd[1]: Closed iscsiuio.socket. Sep 6 00:20:18.362159 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:20:18.362206 systemd[1]: Stopped ignition-setup.service. Sep 6 00:20:18.363305 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:20:18.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.363978 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:20:18.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.365090 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:20:18.365207 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:20:18.366177 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:20:18.366238 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:20:18.366804 systemd-networkd[688]: eth0: DHCPv6 lease lost Sep 6 00:20:18.370830 systemd-networkd[688]: eth1: DHCPv6 lease lost Sep 6 00:20:18.373499 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:20:18.373658 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:20:18.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.375269 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:20:18.375413 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:20:18.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:18.376000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:20:18.377059 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:20:18.377097 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:20:18.378300 systemd[1]: Stopping network-cleanup.service... Sep 6 00:20:18.378000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:20:18.378844 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:20:18.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.378909 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:20:18.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.379601 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:20:18.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.379660 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:20:18.382818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:20:18.382887 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:20:18.388220 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:20:18.390456 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:20:18.394350 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:20:18.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.394465 systemd[1]: Stopped network-cleanup.service. Sep 6 00:20:18.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.396201 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:20:18.396392 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:20:18.397072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:20:18.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.397113 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:20:18.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.397513 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:20:18.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.397541 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:20:18.398421 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 6 00:20:18.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.398497 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:20:18.399182 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:20:18.399225 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:20:18.408802 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:20:18.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:18.408897 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:20:18.410575 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:20:18.413054 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:20:18.413147 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:20:18.413810 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:20:18.413872 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:20:18.414327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:20:18.414375 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:20:18.416536 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:20:18.419121 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:20:18.419214 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:20:18.420637 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:20:18.421898 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:20:18.431241 systemd[1]: Switching root. Sep 6 00:20:18.433000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:20:18.433000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:20:18.433000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:20:18.433000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:20:18.433000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:20:18.451961 systemd-journald[185]: Journal stopped Sep 6 00:20:21.883587 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Sep 6 00:20:21.883661 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:20:21.883677 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:20:21.883689 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:20:21.883700 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:20:21.883726 kernel: SELinux: policy capability open_perms=1 Sep 6 00:20:21.883738 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:20:21.883751 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:20:21.883765 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:20:21.883776 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:20:21.883788 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:20:21.883799 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:20:21.883811 systemd[1]: Successfully loaded SELinux policy in 42.330ms. Sep 6 00:20:21.883840 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.448ms. Sep 6 00:20:21.883854 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:21.883885 systemd[1]: Detected virtualization kvm. Sep 6 00:20:21.883901 systemd[1]: Detected architecture x86-64. Sep 6 00:20:21.883922 systemd[1]: Detected first boot. Sep 6 00:20:21.883936 systemd[1]: Hostname set to . Sep 6 00:20:21.883950 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:20:21.883962 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:20:21.883990 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:20:21.884003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:21.884016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:21.884032 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:21.884045 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:20:21.884057 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:20:21.884071 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:20:21.884092 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:20:21.884109 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:20:21.884133 systemd[1]: Created slice system-getty.slice. Sep 6 00:20:21.884150 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:20:21.884177 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:20:21.884196 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:20:21.884213 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:20:21.884231 systemd[1]: Created slice user.slice. Sep 6 00:20:21.884248 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:21.884265 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:20:21.884297 systemd[1]: Set up automount boot.automount. Sep 6 00:20:21.884315 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
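The SELinux lines above enumerate the policy capabilities compiled into the loaded policy. Assuming the standard selinuxfs mount at /sys/fs/selinux, the same flags can be read back at runtime; a short sketch:

# Sketch assuming the standard selinuxfs mount: each file under
# /sys/fs/selinux/policy_capabilities holds 0 or 1 for one of the capability
# flags the kernel printed above (network_peer_controls, open_perms, ...).
import os

CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

def policy_capabilities(caps_dir: str = CAPS_DIR) -> dict:
    caps = {}
    for name in sorted(os.listdir(caps_dir)):
        with open(os.path.join(caps_dir, name)) as f:
            caps[name] = f.read().strip()
    return caps

if __name__ == "__main__":
    for name, value in policy_capabilities().items():
        print(f"{name}={value}")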
Sep 6 00:20:21.884339 systemd[1]: Reached target integritysetup.target. Sep 6 00:20:21.884353 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:21.884377 systemd[1]: Reached target remote-fs.target. Sep 6 00:20:21.884390 systemd[1]: Reached target slices.target. Sep 6 00:20:21.884402 systemd[1]: Reached target swap.target. Sep 6 00:20:21.884420 systemd[1]: Reached target torcx.target. Sep 6 00:20:21.884432 systemd[1]: Reached target veritysetup.target. Sep 6 00:20:21.884444 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:20:21.884459 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:20:21.884471 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:20:21.884483 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:20:21.884496 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:20:21.884508 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:21.884520 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:21.884533 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:21.884545 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:20:21.884562 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:20:21.884575 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:20:21.884591 systemd[1]: Mounting media.mount... Sep 6 00:20:21.884603 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:21.884616 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:20:21.884628 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:20:21.884640 systemd[1]: Mounting tmp.mount... Sep 6 00:20:21.884658 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:20:21.884671 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:21.884683 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:21.884695 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:20:21.895307 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:21.895370 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:21.895391 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:21.895410 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:20:21.895427 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:21.895446 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:20:21.895465 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 00:20:21.895481 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:20:21.895502 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:21.895514 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:20:21.895527 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:20:21.895540 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:20:21.895552 kernel: fuse: init (API version 7.34) Sep 6 00:20:21.895565 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:21.895577 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:21.895589 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:20:21.895601 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:20:21.895616 kernel: loop: module loaded Sep 6 00:20:21.895629 systemd[1]: Mounted media.mount. 
Sep 6 00:20:21.895641 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:20:21.895654 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:20:21.895667 systemd[1]: Mounted tmp.mount. Sep 6 00:20:21.895680 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:20:21.895692 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:20:21.895704 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:20:21.895729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:21.895742 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:21.895758 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:21.895770 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:21.895783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:21.895795 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:21.895813 systemd-journald[989]: Journal started Sep 6 00:20:21.895878 systemd-journald[989]: Runtime Journal (/run/log/journal/ee51b3a35b774876a7d1d83dd1afc902) is 4.9M, max 39.5M, 34.5M free. Sep 6 00:20:21.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.877000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:20:21.877000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe1678caf0 a2=4000 a3=7ffe1678cb8c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:21.877000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:20:21.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:20:21.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.900435 systemd[1]: Started systemd-journald.service. Sep 6 00:20:21.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.898847 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:20:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.899014 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:20:21.899596 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:21.899816 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:21.904861 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:21.910174 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:20:21.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.914473 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:20:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.915415 systemd[1]: Reached target network-pre.target. Sep 6 00:20:21.917111 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:20:21.923059 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:20:21.923482 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:20:21.925843 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:20:21.927928 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:20:21.928859 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:21.930300 systemd[1]: Starting systemd-random-seed.service... 
Sep 6 00:20:21.930744 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:21.933097 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:21.949253 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:20:21.949769 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:20:21.952245 systemd-journald[989]: Time spent on flushing to /var/log/journal/ee51b3a35b774876a7d1d83dd1afc902 is 45.323ms for 1086 entries. Sep 6 00:20:21.952245 systemd-journald[989]: System Journal (/var/log/journal/ee51b3a35b774876a7d1d83dd1afc902) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:20:22.005840 systemd-journald[989]: Received client request to flush runtime journal. Sep 6 00:20:21.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:21.965214 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:20:21.965792 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:20:21.992264 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:22.009180 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:20:22.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.012944 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:20:22.015897 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:20:22.043707 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:20:22.045665 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:20:22.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.053366 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:20:22.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.055253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:20:22.068409 udevadm[1043]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:20:22.089124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:20:22.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.673759 systemd[1]: Finished systemd-hwdb-update.service. 
Sep 6 00:20:22.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.675591 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:22.700828 systemd-udevd[1049]: Using default interface naming scheme 'v252'. Sep 6 00:20:22.726823 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:22.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.729340 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:22.740929 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:20:22.796967 systemd[1]: Found device dev-ttyS0.device. Sep 6 00:20:22.827033 systemd[1]: Started systemd-userdbd.service. Sep 6 00:20:22.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.847076 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:22.847324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:22.848815 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:22.852223 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:22.853974 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:22.855247 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:20:22.855346 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:20:22.855479 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:22.856101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:22.856312 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:22.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.858667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:22.858919 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:22.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.862633 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:22.862959 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:20:22.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.863655 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:22.864487 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:22.964625 systemd-networkd[1050]: lo: Link UP Sep 6 00:20:22.964636 systemd-networkd[1050]: lo: Gained carrier Sep 6 00:20:22.965257 systemd-networkd[1050]: Enumeration completed Sep 6 00:20:22.965410 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:22.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:22.965935 systemd-networkd[1050]: eth1: Configuring with /run/systemd/network/10-7e:56:47:d0:a8:6c.network. Sep 6 00:20:22.967143 systemd-networkd[1050]: eth0: Configuring with /run/systemd/network/10-be:1d:ad:b0:e1:c6.network. Sep 6 00:20:22.967856 systemd-networkd[1050]: eth1: Link UP Sep 6 00:20:22.967864 systemd-networkd[1050]: eth1: Gained carrier Sep 6 00:20:22.971012 systemd-networkd[1050]: eth0: Link UP Sep 6 00:20:22.971022 systemd-networkd[1050]: eth0: Gained carrier Sep 6 00:20:22.974668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
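systemd-networkd picks up the per-interface units named above (e.g. /run/systemd/network/10-be:1d:ad:b0:e1:c6.network), which match each NIC by MAC address and configure it via DHCP. The generated file contents are not shown in the log, so the unit body below is an assumption, and the generator is only a sketch rather than Flatcar's OEM networking script.

# Illustrative sketch only -- writes a per-interface unit of the kind named in
# the log (/run/systemd/network/10-<mac>.network) that matches the NIC by MAC
# address and enables DHCPv4. Real generated files may also carry static
# addresses, routes, or DNS settings.
from pathlib import Path

UNIT_TEMPLATE = """\
[Match]
MACAddress={mac}

[Network]
DHCP=ipv4
"""

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    unit = Path(rundir) / f"10-{mac}.network"
    unit.write_text(UNIT_TEMPLATE.format(mac=mac))
    return unit

if __name__ == "__main__":
    # MAC taken from the unit file name logged for eth0.
    print(write_network_unit("be:1d:ad:b0:e1:c6"))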
Sep 6 00:20:22.984786 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:20:23.000782 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:20:23.003000 audit[1055]: AVC avc: denied { confidentiality } for pid=1055 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:20:23.003000 audit[1055]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=565026b36010 a1=338ec a2=7f3f9d07cbc5 a3=5 items=110 ppid=1049 pid=1055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:23.003000 audit: CWD cwd="/" Sep 6 00:20:23.003000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=1 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=2 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=3 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=4 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=5 name=(null) inode=14242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=6 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=7 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=8 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=9 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=10 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=11 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=12 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=13 name=(null) inode=14246 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=14 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=15 name=(null) inode=14247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=16 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=17 name=(null) inode=14248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=18 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=19 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=20 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=21 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=22 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=23 name=(null) inode=14251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=24 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=25 name=(null) inode=14252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=26 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=27 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=28 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH 
item=29 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=30 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=31 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=32 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=33 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=34 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=35 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=36 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=37 name=(null) inode=14258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=38 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=39 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=40 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=41 name=(null) inode=14260 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=42 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=43 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=44 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=45 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=46 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=47 name=(null) inode=14263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=48 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=49 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=50 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=51 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=52 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=53 name=(null) inode=14266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=55 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=56 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=57 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=58 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=59 name=(null) inode=14269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=60 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=61 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=62 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=63 name=(null) inode=14271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=64 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=65 name=(null) inode=14272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=66 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=67 name=(null) inode=14273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=68 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=69 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=70 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=71 name=(null) inode=14275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=72 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=73 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=74 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=75 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=76 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=77 name=(null) inode=14278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=78 name=(null) inode=14276 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=79 name=(null) inode=14279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=80 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=81 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=82 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=83 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=84 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=85 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=86 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=87 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=88 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=89 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=90 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=91 name=(null) inode=14285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=92 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=93 name=(null) inode=14286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=94 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=95 name=(null) inode=14287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=96 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=97 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=98 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=99 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=100 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=101 name=(null) inode=14290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=102 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=103 name=(null) inode=14291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=104 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=105 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=106 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=107 name=(null) inode=14293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PATH item=109 name=(null) inode=14294 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:23.003000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:20:23.041736 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:20:23.061746 kernel: piix4_smbus 0000:00:01.3: SMBus Host 
Controller at 0x700, revision 0 Sep 6 00:20:23.067470 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:20:23.182921 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:20:23.205323 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:20:23.210463 kernel: kauditd_printk_skb: 203 callbacks suppressed Sep 6 00:20:23.210576 kernel: audit: type=1130 audit(1757118023.205:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.207344 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:20:23.232576 lvm[1092]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:23.263387 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:20:23.264081 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:23.266402 systemd[1]: Starting lvm2-activation.service... Sep 6 00:20:23.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.270886 kernel: audit: type=1130 audit(1757118023.263:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.277034 lvm[1094]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:23.304556 systemd[1]: Finished lvm2-activation.service. Sep 6 00:20:23.305089 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:23.307203 systemd[1]: Mounting media-configdrive.mount... Sep 6 00:20:23.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.310008 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:20:23.310078 systemd[1]: Reached target machines.target. Sep 6 00:20:23.310747 kernel: audit: type=1130 audit(1757118023.304:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.312079 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:20:23.325916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:20:23.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.329740 kernel: audit: type=1130 audit(1757118023.325:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:23.331739 kernel: ISO 9660 Extensions: RRIP_1991A Sep 6 00:20:23.334336 systemd[1]: Mounted media-configdrive.mount. Sep 6 00:20:23.334928 systemd[1]: Reached target local-fs.target. Sep 6 00:20:23.337362 systemd[1]: Starting ldconfig.service... Sep 6 00:20:23.338407 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:23.338478 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:23.340491 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:20:23.343004 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:20:23.346607 systemd[1]: Starting systemd-sysext.service... Sep 6 00:20:23.363190 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) Sep 6 00:20:23.366863 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:20:23.372078 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:20:23.379299 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:20:23.379560 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:20:23.402764 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:20:23.472862 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:20:23.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.476037 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:20:23.479829 kernel: audit: type=1130 audit(1757118023.475:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.510180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:20:23.522881 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Sep 6 00:20:23.522881 systemd-fsck[1113]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:20:23.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.523920 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:20:23.526031 systemd[1]: Mounting boot.mount... Sep 6 00:20:23.529056 kernel: audit: type=1130 audit(1757118023.523:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.537774 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:20:23.542940 systemd[1]: Mounted boot.mount. Sep 6 00:20:23.565741 (sd-sysext)[1120]: Using extensions 'kubernetes'. Sep 6 00:20:23.566154 (sd-sysext)[1120]: Merged extensions into '/usr'. Sep 6 00:20:23.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:23.582471 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:20:23.585735 kernel: audit: type=1130 audit(1757118023.582:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.599557 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:23.601782 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:20:23.607397 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:23.609497 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:23.611337 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:23.615222 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:23.618651 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:23.618827 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:23.618937 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:23.622054 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:20:23.627696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:23.627907 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:23.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.629066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:23.629237 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:23.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.634681 kernel: audit: type=1130 audit(1757118023.627:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.634776 kernel: audit: type=1131 audit(1757118023.627:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.636660 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:23.637070 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:23.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:23.639769 kernel: audit: type=1130 audit(1757118023.635:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.643257 systemd[1]: Finished systemd-sysext.service. Sep 6 00:20:23.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:23.651513 systemd[1]: Starting ensure-sysext.service... Sep 6 00:20:23.652298 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:23.652508 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:23.655106 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:20:23.667961 systemd[1]: Reloading. Sep 6 00:20:23.690112 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:20:23.693591 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:20:23.698599 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:20:23.816070 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:20:23.816594 /usr/lib/systemd/system-generators/torcx-generator[1156]: time="2025-09-06T00:20:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:23.816642 /usr/lib/systemd/system-generators/torcx-generator[1156]: time="2025-09-06T00:20:23Z" level=info msg="torcx already run" Sep 6 00:20:23.939849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:23.940073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:23.969578 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:24.038056 systemd[1]: Finished ldconfig.service. Sep 6 00:20:24.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.041021 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Sep 6 00:20:24.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.045044 systemd[1]: Starting audit-rules.service... Sep 6 00:20:24.047871 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:20:24.050376 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:20:24.054859 systemd[1]: Starting systemd-resolved.service... Sep 6 00:20:24.062533 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:20:24.075222 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:20:24.077090 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:20:24.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.081139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:24.089470 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.092054 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:24.098030 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:24.101895 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:24.102805 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.102976 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:24.103111 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:24.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.105535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:24.105742 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:24.106620 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:24.106812 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:24.107436 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:20:24.111264 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.113159 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:24.118594 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:24.120641 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.120840 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:24.121007 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:24.126898 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.130139 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:24.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.133913 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.134067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:24.135916 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:20:24.136450 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:24.137493 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:24.137707 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:24.140000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.143194 systemd[1]: Finished ensure-sysext.service. Sep 6 00:20:24.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.147048 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:20:24.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:24.148272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:24.148482 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:24.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.158079 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:24.158258 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:24.158737 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.159372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:24.159555 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:24.160125 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:24.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.181588 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:20:24.183600 systemd[1]: Starting systemd-update-done.service... Sep 6 00:20:24.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:24.197823 systemd[1]: Finished systemd-update-done.service. Sep 6 00:20:24.220000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:20:24.220000 audit[1252]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcca010e70 a2=420 a3=0 items=0 ppid=1211 pid=1252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:24.220000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:20:24.221790 systemd[1]: Finished audit-rules.service. Sep 6 00:20:24.222752 augenrules[1252]: No rules Sep 6 00:20:24.222968 systemd-networkd[1050]: eth0: Gained IPv6LL Sep 6 00:20:24.228970 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:20:24.251409 systemd-resolved[1215]: Positive Trust Anchors: Sep 6 00:20:24.252059 systemd-resolved[1215]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:24.252211 systemd-resolved[1215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:24.259193 systemd-resolved[1215]: Using system hostname 'ci-3510.3.8-n-0d6cc4df9c'. Sep 6 00:20:24.261833 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:24.262389 systemd[1]: Reached target network.target. Sep 6 00:20:24.262745 systemd[1]: Reached target network-online.target. Sep 6 00:20:24.263072 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:24.263423 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:20:24.263975 systemd[1]: Reached target sysinit.target. Sep 6 00:20:24.264493 systemd[1]: Started motdgen.path. Sep 6 00:20:24.264818 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:20:24.265161 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:20:24.265452 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:20:24.265481 systemd[1]: Reached target paths.target. Sep 6 00:20:24.265820 systemd[1]: Reached target time-set.target. Sep 6 00:20:24.266346 systemd[1]: Started logrotate.timer. Sep 6 00:20:24.266820 systemd[1]: Started mdadm.timer. Sep 6 00:20:24.267153 systemd[1]: Reached target timers.target. Sep 6 00:20:24.268025 systemd[1]: Listening on dbus.socket. Sep 6 00:20:24.270266 systemd[1]: Starting docker.socket... Sep 6 00:20:24.272200 systemd[1]: Listening on sshd.socket. Sep 6 00:20:24.272643 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:24.273066 systemd[1]: Listening on docker.socket. Sep 6 00:20:24.273458 systemd[1]: Reached target sockets.target. Sep 6 00:20:24.274182 systemd[1]: Reached target basic.target. Sep 6 00:20:24.275656 systemd[1]: System is tainted: cgroupsv1 Sep 6 00:20:24.276855 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.276924 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:24.278685 systemd[1]: Starting containerd.service... Sep 6 00:20:24.282431 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:20:24.284374 systemd[1]: Starting dbus.service... Sep 6 00:20:24.288675 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:20:24.294631 systemd[1]: Starting extend-filesystems.service... Sep 6 00:20:24.295244 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:20:24.297442 systemd[1]: Starting kubelet.service... Sep 6 00:20:24.299465 systemd[1]: Starting motdgen.service... Sep 6 00:20:24.300064 jq[1266]: false Sep 6 00:20:24.301999 systemd[1]: Starting prepare-helm.service... Sep 6 00:20:24.304765 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Sep 6 00:20:24.314025 systemd[1]: Starting sshd-keygen.service... Sep 6 00:20:24.319104 systemd[1]: Starting systemd-logind.service... Sep 6 00:20:24.319849 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:24.319955 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:20:24.323309 systemd[1]: Starting update-engine.service... Sep 6 00:20:24.325896 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:20:24.340421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:20:24.340732 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:20:24.347957 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:24.348024 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:24.355289 systemd-timesyncd[1216]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). Sep 6 00:20:24.355748 systemd-timesyncd[1216]: Initial clock synchronization to Sat 2025-09-06 00:20:24.657136 UTC. Sep 6 00:20:24.365309 jq[1281]: true Sep 6 00:20:24.372155 tar[1285]: linux-amd64/helm Sep 6 00:20:24.385386 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:20:24.385655 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:20:24.396425 dbus-daemon[1265]: [system] SELinux support is enabled Sep 6 00:20:24.397144 jq[1294]: true Sep 6 00:20:24.399579 systemd[1]: Started dbus.service. Sep 6 00:20:24.402232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:20:24.402264 systemd[1]: Reached target system-config.target. Sep 6 00:20:24.402682 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:20:24.402729 systemd[1]: Reached target user-config.target. Sep 6 00:20:24.463950 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:20:24.464226 systemd[1]: Finished motdgen.service. Sep 6 00:20:24.497928 extend-filesystems[1267]: Found loop1 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda1 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda2 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda3 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found usr Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda4 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda6 Sep 6 00:20:24.501067 extend-filesystems[1267]: Found vda7 Sep 6 00:20:24.509162 extend-filesystems[1267]: Found vda9 Sep 6 00:20:24.509162 extend-filesystems[1267]: Checking size of /dev/vda9 Sep 6 00:20:24.506866 systemd[1]: Started update-engine.service. Sep 6 00:20:24.510334 update_engine[1279]: I0906 00:20:24.501641 1279 main.cc:92] Flatcar Update Engine starting Sep 6 00:20:24.510334 update_engine[1279]: I0906 00:20:24.507419 1279 update_check_scheduler.cc:74] Next update check in 6m9s Sep 6 00:20:24.510279 systemd[1]: Started locksmithd.service. 
Sep 6 00:20:24.528003 bash[1322]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:20:24.530088 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:20:24.542051 systemd-networkd[1050]: eth1: Gained IPv6LL Sep 6 00:20:24.556581 extend-filesystems[1267]: Resized partition /dev/vda9 Sep 6 00:20:24.571191 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:20:24.577275 env[1286]: time="2025-09-06T00:20:24.577202703Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:20:24.579764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 6 00:20:24.648283 coreos-metadata[1264]: Sep 06 00:20:24.647 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:20:24.655470 systemd-logind[1277]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:20:24.655517 systemd-logind[1277]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:20:24.657950 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 6 00:20:24.659483 systemd-logind[1277]: New seat seat0. Sep 6 00:20:24.668982 systemd[1]: Started systemd-logind.service. Sep 6 00:20:24.680973 coreos-metadata[1264]: Sep 06 00:20:24.667 INFO Fetch successful Sep 6 00:20:24.681912 unknown[1264]: wrote ssh authorized keys file for user: core Sep 6 00:20:24.684727 extend-filesystems[1330]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:20:24.684727 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 6 00:20:24.684727 extend-filesystems[1330]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 6 00:20:24.684619 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:20:24.687928 extend-filesystems[1267]: Resized filesystem in /dev/vda9 Sep 6 00:20:24.687928 extend-filesystems[1267]: Found vdb Sep 6 00:20:24.684895 systemd[1]: Finished extend-filesystems.service. Sep 6 00:20:24.703021 env[1286]: time="2025-09-06T00:20:24.702965125Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:20:24.704234 env[1286]: time="2025-09-06T00:20:24.704196314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:24.706196 update-ssh-keys[1336]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:20:24.706780 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:20:24.720524 env[1286]: time="2025-09-06T00:20:24.720471728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:24.720805 env[1286]: time="2025-09-06T00:20:24.720776374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:24.721311 env[1286]: time="2025-09-06T00:20:24.721265104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:24.721450 env[1286]: time="2025-09-06T00:20:24.721428617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:20:24.721778 env[1286]: time="2025-09-06T00:20:24.721753543Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:20:24.721874 env[1286]: time="2025-09-06T00:20:24.721858896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:24.722026 env[1286]: time="2025-09-06T00:20:24.722011231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:24.722512 env[1286]: time="2025-09-06T00:20:24.722491317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:24.722972 env[1286]: time="2025-09-06T00:20:24.722948728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:24.723052 env[1286]: time="2025-09-06T00:20:24.723037663Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:20:24.723215 env[1286]: time="2025-09-06T00:20:24.723198849Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:20:24.723428 env[1286]: time="2025-09-06T00:20:24.723412016Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:20:24.726779 env[1286]: time="2025-09-06T00:20:24.726745479Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:20:24.726903 env[1286]: time="2025-09-06T00:20:24.726887115Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:20:24.726988 env[1286]: time="2025-09-06T00:20:24.726973422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:20:24.727072 env[1286]: time="2025-09-06T00:20:24.727058329Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727135 env[1286]: time="2025-09-06T00:20:24.727121594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727197 env[1286]: time="2025-09-06T00:20:24.727184285Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727264 env[1286]: time="2025-09-06T00:20:24.727251803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727326 env[1286]: time="2025-09-06T00:20:24.727312784Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727387 env[1286]: time="2025-09-06T00:20:24.727374175Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727454 env[1286]: time="2025-09-06T00:20:24.727440295Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.727515 env[1286]: time="2025-09-06T00:20:24.727502899Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 6 00:20:24.727575 env[1286]: time="2025-09-06T00:20:24.727562489Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:20:24.727777 env[1286]: time="2025-09-06T00:20:24.727760226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:20:24.727938 env[1286]: time="2025-09-06T00:20:24.727923394Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:20:24.728330 env[1286]: time="2025-09-06T00:20:24.728308374Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:20:24.728539 env[1286]: time="2025-09-06T00:20:24.728520302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.728691 env[1286]: time="2025-09-06T00:20:24.728674739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:20:24.728813 env[1286]: time="2025-09-06T00:20:24.728797981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.729425 env[1286]: time="2025-09-06T00:20:24.729403118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.729585 env[1286]: time="2025-09-06T00:20:24.729568286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.729657 env[1286]: time="2025-09-06T00:20:24.729644280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.729803 env[1286]: time="2025-09-06T00:20:24.729786554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.729884 env[1286]: time="2025-09-06T00:20:24.729871112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.730000 env[1286]: time="2025-09-06T00:20:24.729986811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.730069 env[1286]: time="2025-09-06T00:20:24.730056177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.730198 env[1286]: time="2025-09-06T00:20:24.730181911Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:20:24.730618 env[1286]: time="2025-09-06T00:20:24.730549651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.731019 env[1286]: time="2025-09-06T00:20:24.730996225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.731337 env[1286]: time="2025-09-06T00:20:24.731320205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.731410 env[1286]: time="2025-09-06T00:20:24.731396719Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:20:24.731737 env[1286]: time="2025-09-06T00:20:24.731464727Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:20:24.731875 env[1286]: time="2025-09-06T00:20:24.731858232Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:20:24.731973 env[1286]: time="2025-09-06T00:20:24.731957117Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:20:24.732127 env[1286]: time="2025-09-06T00:20:24.732108579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:20:24.733045 env[1286]: time="2025-09-06T00:20:24.732985764Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:20:24.734683 env[1286]: time="2025-09-06T00:20:24.734080141Z" level=info msg="Connect containerd service" Sep 6 00:20:24.734683 env[1286]: time="2025-09-06T00:20:24.734139482Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:20:24.736306 env[1286]: time="2025-09-06T00:20:24.736269896Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:20:24.737798 env[1286]: time="2025-09-06T00:20:24.737776737Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 6 00:20:24.738007 env[1286]: time="2025-09-06T00:20:24.737934656Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:20:24.738148 env[1286]: time="2025-09-06T00:20:24.738134380Z" level=info msg="containerd successfully booted in 0.161946s" Sep 6 00:20:24.738280 systemd[1]: Started containerd.service. Sep 6 00:20:24.750803 env[1286]: time="2025-09-06T00:20:24.750702379Z" level=info msg="Start subscribing containerd event" Sep 6 00:20:24.755356 env[1286]: time="2025-09-06T00:20:24.755309011Z" level=info msg="Start recovering state" Sep 6 00:20:24.755699 env[1286]: time="2025-09-06T00:20:24.755624314Z" level=info msg="Start event monitor" Sep 6 00:20:24.755979 env[1286]: time="2025-09-06T00:20:24.755958216Z" level=info msg="Start snapshots syncer" Sep 6 00:20:24.756078 env[1286]: time="2025-09-06T00:20:24.756061726Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:20:24.756457 env[1286]: time="2025-09-06T00:20:24.756436631Z" level=info msg="Start streaming server" Sep 6 00:20:25.457684 tar[1285]: linux-amd64/LICENSE Sep 6 00:20:25.458119 tar[1285]: linux-amd64/README.md Sep 6 00:20:25.464398 systemd[1]: Finished prepare-helm.service. Sep 6 00:20:25.509807 locksmithd[1324]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:20:26.012939 systemd[1]: Started kubelet.service. Sep 6 00:20:26.021352 sshd_keygen[1295]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:20:26.059231 systemd[1]: Finished sshd-keygen.service. Sep 6 00:20:26.061869 systemd[1]: Starting issuegen.service... Sep 6 00:20:26.071087 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:20:26.071342 systemd[1]: Finished issuegen.service. Sep 6 00:20:26.073558 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:20:26.088621 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:20:26.091146 systemd[1]: Started getty@tty1.service. Sep 6 00:20:26.093640 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:20:26.094435 systemd[1]: Reached target getty.target. Sep 6 00:20:26.094852 systemd[1]: Reached target multi-user.target. Sep 6 00:20:26.097283 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:20:26.109703 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:20:26.110029 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:20:26.112540 systemd[1]: Startup finished in 5.698s (kernel) + 7.526s (userspace) = 13.225s. Sep 6 00:20:26.722951 kubelet[1353]: E0906 00:20:26.722886 1353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:26.725288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:26.725505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:20:27.255849 systemd[1]: Created slice system-sshd.slice. Sep 6 00:20:27.258155 systemd[1]: Started sshd@0-64.227.108.127:22-147.75.109.163:55188.service. 
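The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml; that file is normally generated by kubeadm when the node is initialized or joined, so the failure is expected on a node that has not been joined yet. A hedged sketch of a placeholder KubeletConfiguration written to that path, only to illustrate what the kubelet is looking for (this is an assumed-minimal file, not what kubeadm would actually generate):

```python
from pathlib import Path

# Assumed-minimal KubeletConfiguration; kubeadm normally writes this file
# during init/join. cgroupfs is chosen to match SystemdCgroup:false in the
# containerd CRI config dumped earlier in this journal.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
"""

def write_placeholder_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_KUBELET_CONFIG)
    print(f"wrote {target} ({target.stat().st_size} bytes)")

if __name__ == "__main__":
    write_placeholder_config()
```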
Sep 6 00:20:27.325959 sshd[1380]: Accepted publickey for core from 147.75.109.163 port 55188 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:27.328809 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.339089 systemd[1]: Created slice user-500.slice. Sep 6 00:20:27.340317 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:20:27.346161 systemd-logind[1277]: New session 1 of user core. Sep 6 00:20:27.352855 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:20:27.354934 systemd[1]: Starting user@500.service... Sep 6 00:20:27.365965 (systemd)[1385]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.457960 systemd[1385]: Queued start job for default target default.target. Sep 6 00:20:27.458656 systemd[1385]: Reached target paths.target. Sep 6 00:20:27.458826 systemd[1385]: Reached target sockets.target. Sep 6 00:20:27.458954 systemd[1385]: Reached target timers.target. Sep 6 00:20:27.459194 systemd[1385]: Reached target basic.target. Sep 6 00:20:27.459413 systemd[1]: Started user@500.service. Sep 6 00:20:27.460422 systemd[1]: Started session-1.scope. Sep 6 00:20:27.461274 systemd[1385]: Reached target default.target. Sep 6 00:20:27.461673 systemd[1385]: Startup finished in 86ms. Sep 6 00:20:27.524711 systemd[1]: Started sshd@1-64.227.108.127:22-147.75.109.163:55200.service. Sep 6 00:20:27.578132 sshd[1394]: Accepted publickey for core from 147.75.109.163 port 55200 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:27.578781 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.584607 systemd-logind[1277]: New session 2 of user core. Sep 6 00:20:27.585077 systemd[1]: Started session-2.scope. Sep 6 00:20:27.651150 sshd[1394]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:27.656133 systemd[1]: Started sshd@2-64.227.108.127:22-147.75.109.163:55210.service. Sep 6 00:20:27.656925 systemd[1]: sshd@1-64.227.108.127:22-147.75.109.163:55200.service: Deactivated successfully. Sep 6 00:20:27.662355 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:20:27.663169 systemd-logind[1277]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:20:27.665470 systemd-logind[1277]: Removed session 2. Sep 6 00:20:27.707578 sshd[1400]: Accepted publickey for core from 147.75.109.163 port 55210 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:27.709926 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.716065 systemd[1]: Started session-3.scope. Sep 6 00:20:27.716368 systemd-logind[1277]: New session 3 of user core. Sep 6 00:20:27.776865 sshd[1400]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:27.780812 systemd[1]: Started sshd@3-64.227.108.127:22-147.75.109.163:55212.service. Sep 6 00:20:27.784136 systemd[1]: sshd@2-64.227.108.127:22-147.75.109.163:55210.service: Deactivated successfully. Sep 6 00:20:27.785951 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:20:27.786618 systemd-logind[1277]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:20:27.788384 systemd-logind[1277]: Removed session 3. 
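Each accepted login above is tagged with the key fingerprint "RSA SHA256:zgVES46caP1+...". That string is the unpadded base64 of the SHA-256 digest of the raw public-key blob, so it can be reproduced directly from the authorized_keys file that was just updated; a small sketch (readability of that path from wherever this runs is an assumption):

```python
import base64
import hashlib

def openssh_fingerprint(authorized_keys_line: str) -> str:
    """Return the OpenSSH-style fingerprint (SHA256:<base64, no padding>) for one key line."""
    # An authorized_keys entry looks like: "<type> <base64 blob> [comment]".
    key_blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(key_blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Path taken from the journal above ("/home/core/.ssh/authorized_keys").
    with open("/home/core/.ssh/authorized_keys") as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#"):
                print(openssh_fingerprint(line), line.split()[0])
```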
Sep 6 00:20:27.825048 sshd[1406]: Accepted publickey for core from 147.75.109.163 port 55212 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:27.826703 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.831699 systemd-logind[1277]: New session 4 of user core. Sep 6 00:20:27.832845 systemd[1]: Started session-4.scope. Sep 6 00:20:27.898079 sshd[1406]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:27.903161 systemd[1]: Started sshd@4-64.227.108.127:22-147.75.109.163:55222.service. Sep 6 00:20:27.904166 systemd[1]: sshd@3-64.227.108.127:22-147.75.109.163:55212.service: Deactivated successfully. Sep 6 00:20:27.905438 systemd-logind[1277]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:20:27.905664 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:20:27.906905 systemd-logind[1277]: Removed session 4. Sep 6 00:20:27.950778 sshd[1413]: Accepted publickey for core from 147.75.109.163 port 55222 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:27.952789 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:27.960290 systemd[1]: Started session-5.scope. Sep 6 00:20:27.960595 systemd-logind[1277]: New session 5 of user core. Sep 6 00:20:28.034412 sudo[1419]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 00:20:28.034705 sudo[1419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:28.047394 dbus-daemon[1265]: \xd0\u001d\xb7I+V: received setenforce notice (enforcing=-1639025024) Sep 6 00:20:28.048401 sudo[1419]: pam_unix(sudo:session): session closed for user root Sep 6 00:20:28.054945 sshd[1413]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:28.059383 systemd[1]: Started sshd@5-64.227.108.127:22-147.75.109.163:55234.service. Sep 6 00:20:28.065227 systemd[1]: sshd@4-64.227.108.127:22-147.75.109.163:55222.service: Deactivated successfully. Sep 6 00:20:28.070038 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:20:28.071022 systemd-logind[1277]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:20:28.073003 systemd-logind[1277]: Removed session 5. Sep 6 00:20:28.107701 sshd[1421]: Accepted publickey for core from 147.75.109.163 port 55234 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:28.110391 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:28.117363 systemd[1]: Started session-6.scope. Sep 6 00:20:28.118135 systemd-logind[1277]: New session 6 of user core. Sep 6 00:20:28.186065 sudo[1428]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 00:20:28.187025 sudo[1428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:28.191376 sudo[1428]: pam_unix(sudo:session): session closed for user root Sep 6 00:20:28.198023 sudo[1427]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 00:20:28.198270 sudo[1427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:28.211682 systemd[1]: Stopping audit-rules.service... 
Sep 6 00:20:28.213000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:20:28.214827 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:20:28.214871 kernel: audit: type=1305 audit(1757118028.213:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:20:28.216842 kernel: audit: type=1300 audit(1757118028.213:164): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc596ca40 a2=420 a3=0 items=0 ppid=1 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.213000 audit[1431]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc596ca40 a2=420 a3=0 items=0 ppid=1 pid=1431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.217067 auditctl[1431]: No rules Sep 6 00:20:28.217578 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 00:20:28.217836 systemd[1]: Stopped audit-rules.service. Sep 6 00:20:28.220513 systemd[1]: Starting audit-rules.service... Sep 6 00:20:28.225383 kernel: audit: type=1327 audit(1757118028.213:164): proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:20:28.225499 kernel: audit: type=1131 audit(1757118028.215:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.213000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:20:28.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.252189 augenrules[1449]: No rules Sep 6 00:20:28.263297 kernel: audit: type=1130 audit(1757118028.252:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.263416 kernel: audit: type=1106 audit(1757118028.253:167): pid=1427 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.253000 audit[1427]: USER_END pid=1427 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.254497 sudo[1427]: pam_unix(sudo:session): session closed for user root Sep 6 00:20:28.253366 systemd[1]: Finished audit-rules.service. 
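The audit records above carry the triggering command in the hex proctitle= field, with NUL bytes separating argv elements. Decoding the value from the CONFIG_CHANGE event confirms it was /sbin/auditctl -D (flush all rules), which matches the "No rules" output from auditctl and augenrules; the same decoder applies to the sshd and iptables records further down. A short sketch:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit proctitle= hex string into the original argv (NUL-separated)."""
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

samples = {
    # Values copied from the audit records in this journal.
    "auditctl flush (above)": "2F7362696E2F617564697463746C002D44",
    "sshd privilege separation (later record)": "737368643A20636F7265205B707269765D",
}
for label, value in samples.items():
    print(f"{label}: {decode_proctitle(value)}")
```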
Sep 6 00:20:28.260900 sshd[1421]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:28.253000 audit[1427]: CRED_DISP pid=1427 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.265366 systemd[1]: Started sshd@6-64.227.108.127:22-147.75.109.163:55246.service. Sep 6 00:20:28.267812 kernel: audit: type=1104 audit(1757118028.253:168): pid=1427 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-64.227.108.127:22-147.75.109.163:55246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.272824 kernel: audit: type=1130 audit(1757118028.262:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-64.227.108.127:22-147.75.109.163:55246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.272000 audit[1421]: USER_END pid=1421 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.274553 systemd[1]: sshd@5-64.227.108.127:22-147.75.109.163:55234.service: Deactivated successfully. Sep 6 00:20:28.275435 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:20:28.278982 kernel: audit: type=1106 audit(1757118028.272:170): pid=1421 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.284363 kernel: audit: type=1104 audit(1757118028.272:171): pid=1421 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.272000 audit[1421]: CRED_DISP pid=1421 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-64.227.108.127:22-147.75.109.163:55234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.285177 systemd-logind[1277]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:20:28.287872 systemd-logind[1277]: Removed session 6. 
Sep 6 00:20:28.316000 audit[1454]: USER_ACCT pid=1454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.317571 sshd[1454]: Accepted publickey for core from 147.75.109.163 port 55246 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:20:28.318000 audit[1454]: CRED_ACQ pid=1454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.318000 audit[1454]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe02eaf550 a2=3 a3=0 items=0 ppid=1 pid=1454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.318000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:20:28.320148 sshd[1454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:28.326332 systemd[1]: Started session-7.scope. Sep 6 00:20:28.326822 systemd-logind[1277]: New session 7 of user core. Sep 6 00:20:28.332000 audit[1454]: USER_START pid=1454 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.335000 audit[1459]: CRED_ACQ pid=1459 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:20:28.389000 audit[1460]: USER_ACCT pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.390101 sudo[1460]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:20:28.389000 audit[1460]: CRED_REFR pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.390516 sudo[1460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:28.392000 audit[1460]: USER_START pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.421428 systemd[1]: Starting docker.service... 
Sep 6 00:20:28.481770 env[1470]: time="2025-09-06T00:20:28.481686093Z" level=info msg="Starting up" Sep 6 00:20:28.485810 env[1470]: time="2025-09-06T00:20:28.485731516Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:28.485810 env[1470]: time="2025-09-06T00:20:28.485779443Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:28.485810 env[1470]: time="2025-09-06T00:20:28.485811108Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:28.486067 env[1470]: time="2025-09-06T00:20:28.485824348Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:28.488314 env[1470]: time="2025-09-06T00:20:28.488274299Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:28.488490 env[1470]: time="2025-09-06T00:20:28.488467439Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:28.488593 env[1470]: time="2025-09-06T00:20:28.488569871Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:28.488696 env[1470]: time="2025-09-06T00:20:28.488677103Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:28.501574 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1354316312-merged.mount: Deactivated successfully. Sep 6 00:20:28.590788 env[1470]: time="2025-09-06T00:20:28.590635435Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:20:28.590788 env[1470]: time="2025-09-06T00:20:28.590675209Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:20:28.591657 env[1470]: time="2025-09-06T00:20:28.591612054Z" level=info msg="Loading containers: start." 
Sep 6 00:20:28.669000 audit[1500]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.669000 audit[1500]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcac16d950 a2=0 a3=7ffcac16d93c items=0 ppid=1470 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.669000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 6 00:20:28.673000 audit[1502]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.673000 audit[1502]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedb96ee40 a2=0 a3=7ffedb96ee2c items=0 ppid=1470 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.673000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 6 00:20:28.676000 audit[1504]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.676000 audit[1504]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffca5e10470 a2=0 a3=7ffca5e1045c items=0 ppid=1470 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.676000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 00:20:28.679000 audit[1506]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.679000 audit[1506]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7258a4d0 a2=0 a3=7ffc7258a4bc items=0 ppid=1470 pid=1506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.679000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 00:20:28.682000 audit[1508]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.682000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6bc1b2f0 a2=0 a3=7ffe6bc1b2dc items=0 ppid=1470 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.682000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 6 00:20:28.701000 audit[1513]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 
00:20:28.701000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc17d03e50 a2=0 a3=7ffc17d03e3c items=0 ppid=1470 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.701000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 6 00:20:28.709000 audit[1515]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.709000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdbd3ba420 a2=0 a3=7ffdbd3ba40c items=0 ppid=1470 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.709000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 6 00:20:28.714000 audit[1517]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.714000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed6d377d0 a2=0 a3=7ffed6d377bc items=0 ppid=1470 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.714000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 6 00:20:28.716000 audit[1519]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.716000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe0c9e0070 a2=0 a3=7ffe0c9e005c items=0 ppid=1470 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.716000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:20:28.725000 audit[1523]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.725000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff8b0aa640 a2=0 a3=7fff8b0aa62c items=0 ppid=1470 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.725000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:20:28.731000 audit[1524]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.731000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffbea53020 a2=0 a3=7fffbea5300c items=0 ppid=1470 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.731000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:20:28.745764 kernel: Initializing XFRM netlink socket Sep 6 00:20:28.793766 env[1470]: time="2025-09-06T00:20:28.793688047Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:20:28.832000 audit[1532]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.832000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc13069300 a2=0 a3=7ffc130692ec items=0 ppid=1470 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.832000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 6 00:20:28.847000 audit[1535]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.847000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdbedcf500 a2=0 a3=7ffdbedcf4ec items=0 ppid=1470 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 6 00:20:28.852000 audit[1538]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.852000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd4ebb650 a2=0 a3=7ffcd4ebb63c items=0 ppid=1470 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 6 00:20:28.855000 audit[1540]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.855000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe29018d40 a2=0 a3=7ffe29018d2c items=0 ppid=1470 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.855000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 6 00:20:28.858000 audit[1542]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1542 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.858000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffce2760450 a2=0 a3=7ffce276043c items=0 ppid=1470 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.858000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 6 00:20:28.860000 audit[1544]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.860000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffe7f867f0 a2=0 a3=7fffe7f867dc items=0 ppid=1470 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.860000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 6 00:20:28.862000 audit[1546]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.862000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffffb6ec5b0 a2=0 a3=7ffffb6ec59c items=0 ppid=1470 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.862000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 6 00:20:28.875000 audit[1549]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.875000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe826a62a0 a2=0 a3=7ffe826a628c items=0 ppid=1470 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.875000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 6 00:20:28.880000 audit[1551]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.880000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff4a93a760 a2=0 a3=7fff4a93a74c items=0 ppid=1470 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.880000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 00:20:28.884000 
audit[1553]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.884000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe6022aa60 a2=0 a3=7ffe6022aa4c items=0 ppid=1470 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.884000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 00:20:28.888000 audit[1555]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.888000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd100b3610 a2=0 a3=7ffd100b35fc items=0 ppid=1470 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.888000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 6 00:20:28.889615 systemd-networkd[1050]: docker0: Link UP Sep 6 00:20:28.901000 audit[1559]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.901000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd05a51b20 a2=0 a3=7ffd05a51b0c items=0 ppid=1470 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.901000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:20:28.906000 audit[1560]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:28.906000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc8bbe5530 a2=0 a3=7ffc8bbe551c items=0 ppid=1470 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:28.906000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 00:20:28.908599 env[1470]: time="2025-09-06T00:20:28.908549855Z" level=info msg="Loading containers: done." 
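dockerd reported the default bridge docker0 on 172.17.0.0/16 (overridable with --bip), and the MASQUERADE and DOCKER chain rules registered above are scoped to that subnet. A quick sketch of what that address plan implies, assuming the conventional layout where the first usable host is the bridge gateway:

```python
import ipaddress

# Subnet reported by dockerd for the default bridge; --bip would change this.
bridge_net = ipaddress.ip_network("172.17.0.0/16")

gateway = next(bridge_net.hosts())        # docker0 itself, conventionally 172.17.0.1
usable = bridge_net.num_addresses - 2     # minus network and broadcast addresses

print(f"bridge network : {bridge_net}")
print(f"bridge gateway : {gateway}")
print(f"addresses left for containers: {usable - 1}")  # one is taken by the gateway
```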
Sep 6 00:20:28.926639 env[1470]: time="2025-09-06T00:20:28.923434502Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:20:28.926639 env[1470]: time="2025-09-06T00:20:28.923893737Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:20:28.926639 env[1470]: time="2025-09-06T00:20:28.924086573Z" level=info msg="Daemon has completed initialization" Sep 6 00:20:28.941493 systemd[1]: Started docker.service. Sep 6 00:20:28.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.951941 env[1470]: time="2025-09-06T00:20:28.951881455Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:20:28.984448 systemd[1]: Starting coreos-metadata.service... Sep 6 00:20:29.031138 coreos-metadata[1586]: Sep 06 00:20:29.030 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:20:29.043022 coreos-metadata[1586]: Sep 06 00:20:29.042 INFO Fetch successful Sep 6 00:20:29.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:29.057677 systemd[1]: Finished coreos-metadata.service. Sep 6 00:20:30.022564 env[1286]: time="2025-09-06T00:20:30.022504691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:20:30.499979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799406343.mount: Deactivated successfully. 
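Once the daemon logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A standard-library sketch that issues GET /_ping against it (no third-party Docker client assumed); the daemon answers "OK" when healthy:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a unix socket instead of a TCP host."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def docker_ping(socket_path: str = "/run/docker.sock") -> str:
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", "/_ping")
        response = conn.getresponse()
        return f"{response.status} {response.read().decode()}"
    finally:
        conn.close()

if __name__ == "__main__":
    print(docker_ping())
```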
Sep 6 00:20:31.915890 env[1286]: time="2025-09-06T00:20:31.915831307Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:31.917652 env[1286]: time="2025-09-06T00:20:31.917579341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:31.920596 env[1286]: time="2025-09-06T00:20:31.920545241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:31.921648 env[1286]: time="2025-09-06T00:20:31.921611199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:31.922413 env[1286]: time="2025-09-06T00:20:31.922378597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:20:31.923230 env[1286]: time="2025-09-06T00:20:31.923202882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:20:33.661771 env[1286]: time="2025-09-06T00:20:33.661685147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:33.663076 env[1286]: time="2025-09-06T00:20:33.663027776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:33.664934 env[1286]: time="2025-09-06T00:20:33.664898234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:33.666496 env[1286]: time="2025-09-06T00:20:33.666464127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:33.667299 env[1286]: time="2025-09-06T00:20:33.667264645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:20:33.668134 env[1286]: time="2025-09-06T00:20:33.668107933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:20:34.973132 env[1286]: time="2025-09-06T00:20:34.973071334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:34.974554 env[1286]: time="2025-09-06T00:20:34.974511034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:34.976351 env[1286]: 
time="2025-09-06T00:20:34.976316904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:34.978224 env[1286]: time="2025-09-06T00:20:34.978191116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:34.979184 env[1286]: time="2025-09-06T00:20:34.979143120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:20:34.980028 env[1286]: time="2025-09-06T00:20:34.980001806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:20:36.175784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983540290.mount: Deactivated successfully. Sep 6 00:20:36.858031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:20:36.864827 kernel: kauditd_printk_skb: 85 callbacks suppressed Sep 6 00:20:36.864909 kernel: audit: type=1130 audit(1757118036.857:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:36.864938 kernel: audit: type=1131 audit(1757118036.857:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:36.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:36.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:36.858265 systemd[1]: Stopped kubelet.service. Sep 6 00:20:36.860286 systemd[1]: Starting kubelet.service... 
Sep 6 00:20:36.977470 env[1286]: time="2025-09-06T00:20:36.977409128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:36.980004 env[1286]: time="2025-09-06T00:20:36.979782344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:36.981263 env[1286]: time="2025-09-06T00:20:36.980819188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:36.982052 env[1286]: time="2025-09-06T00:20:36.981851159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:36.982910 env[1286]: time="2025-09-06T00:20:36.982862301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:20:36.984798 env[1286]: time="2025-09-06T00:20:36.983752799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:20:37.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:37.011979 systemd[1]: Started kubelet.service. Sep 6 00:20:37.015753 kernel: audit: type=1130 audit(1757118037.012:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:37.076545 kubelet[1615]: E0906 00:20:37.076499 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:37.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:20:37.079973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:37.080184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:20:37.084215 kernel: audit: type=1131 audit(1757118037.079:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:20:37.551865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845043232.mount: Deactivated successfully. 
Sep 6 00:20:38.604029 env[1286]: time="2025-09-06T00:20:38.603973985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:38.607152 env[1286]: time="2025-09-06T00:20:38.607114077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:38.609783 env[1286]: time="2025-09-06T00:20:38.609750376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:38.612306 env[1286]: time="2025-09-06T00:20:38.612271470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:38.623404 env[1286]: time="2025-09-06T00:20:38.623352042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:20:38.626043 env[1286]: time="2025-09-06T00:20:38.626005225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:20:39.190492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485124200.mount: Deactivated successfully. Sep 6 00:20:39.194542 env[1286]: time="2025-09-06T00:20:39.194478990Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:39.196203 env[1286]: time="2025-09-06T00:20:39.196165635Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:39.197892 env[1286]: time="2025-09-06T00:20:39.197860732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:39.200114 env[1286]: time="2025-09-06T00:20:39.200067109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:39.200414 env[1286]: time="2025-09-06T00:20:39.200383967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:20:39.201411 env[1286]: time="2025-09-06T00:20:39.201378941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:20:39.688463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420893387.mount: Deactivated successfully. 
Sep 6 00:20:41.992757 env[1286]: time="2025-09-06T00:20:41.992655779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:41.994941 env[1286]: time="2025-09-06T00:20:41.994902824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:41.996976 env[1286]: time="2025-09-06T00:20:41.996944410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:41.999031 env[1286]: time="2025-09-06T00:20:41.999001475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:42.000119 env[1286]: time="2025-09-06T00:20:42.000080489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:20:44.354691 systemd[1]: Stopped kubelet.service. Sep 6 00:20:44.360435 kernel: audit: type=1130 audit(1757118044.354:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.360570 kernel: audit: type=1131 audit(1757118044.354:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.358010 systemd[1]: Starting kubelet.service... Sep 6 00:20:44.403866 systemd[1]: Reloading. Sep 6 00:20:44.524400 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-09-06T00:20:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:44.524936 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-09-06T00:20:44Z" level=info msg="torcx already run" Sep 6 00:20:44.657639 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:44.657663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:44.685149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
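The containerd entries above record each control-plane image pull completing with a digest-pinned reference (kube-scheduler, kube-proxy, coredns, pause, etcd). As a rough illustration, a sketch that collects those tag-to-digest pairs from journal text; the regex mirrors the escaped-quote msg format shown above and is an assumption, not containerd API:

    import re

    # Matches entries like:
    #   msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:...\""
    # The \" sequences are literal in the journal text, hence \\" in the pattern.
    PULL_RE = re.compile(
        r'PullImage \\"(?P<image>[^"\\]+)\\" returns image reference \\"(?P<ref>[^"\\]+)\\"'
    )

    def pulled_images(journal_text: str) -> dict[str, str]:
        """Map image tag -> resolved digest reference for every completed pull."""
        return {m.group("image"): m.group("ref") for m in PULL_RE.finditer(journal_text)}

    # Quick check against a fragment of the log above:
    sample = ('msg="PullImage \\"registry.k8s.io/etcd:3.5.15-0\\" returns image reference '
              '\\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\\""')
    print(pulled_images(sample))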
Sep 6 00:20:44.782949 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 00:20:44.783040 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 00:20:44.783439 systemd[1]: Stopped kubelet.service. Sep 6 00:20:44.786786 kernel: audit: type=1130 audit(1757118044.782:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:20:44.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:20:44.786066 systemd[1]: Starting kubelet.service... Sep 6 00:20:44.909637 systemd[1]: Started kubelet.service. Sep 6 00:20:44.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.915759 kernel: audit: type=1130 audit(1757118044.910:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:44.971105 kubelet[1732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:20:44.971105 kubelet[1732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:20:44.971105 kubelet[1732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
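The three warnings above name kubelet flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) that should migrate into the file passed via --config. A small, purely illustrative sketch that spots those arguments in a kubelet command line; the example argv is hypothetical, not taken from this host:

    # Flag names copied from the deprecation warnings above.
    DEPRECATED_KUBELET_FLAGS = {
        "--container-runtime-endpoint",
        "--pod-infra-container-image",
        "--volume-plugin-dir",
    }

    def deprecated_args(argv: list[str]) -> list[str]:
        """Return the arguments whose flag name appears in the warnings above."""
        return [a for a in argv if a.split("=", 1)[0] in DEPRECATED_KUBELET_FLAGS]

    # Hypothetical command line, for illustration only.
    example_argv = [
        "/usr/bin/kubelet",
        "--config=/var/lib/kubelet/config.yaml",
        "--container-runtime-endpoint=unix:///run/containerd/containerd.sock",
    ]
    print(deprecated_args(example_argv))  # -> the --container-runtime-endpoint argument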
Sep 6 00:20:44.971765 kubelet[1732]: I0906 00:20:44.971156 1732 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:20:45.288872 kubelet[1732]: I0906 00:20:45.288427 1732 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:20:45.288872 kubelet[1732]: I0906 00:20:45.288471 1732 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:20:45.288872 kubelet[1732]: I0906 00:20:45.288769 1732 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:20:45.313582 kubelet[1732]: E0906 00:20:45.313542 1732 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.108.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:45.315998 kubelet[1732]: I0906 00:20:45.315937 1732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:20:45.326146 kubelet[1732]: E0906 00:20:45.326111 1732 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:20:45.326340 kubelet[1732]: I0906 00:20:45.326325 1732 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:20:45.330746 kubelet[1732]: I0906 00:20:45.330689 1732 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:20:45.331922 kubelet[1732]: I0906 00:20:45.331900 1732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:20:45.332260 kubelet[1732]: I0906 00:20:45.332218 1732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:20:45.332559 kubelet[1732]: I0906 00:20:45.332349 1732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0d6cc4df9c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:20:45.332755 kubelet[1732]: I0906 00:20:45.332740 1732 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:20:45.332827 kubelet[1732]: I0906 00:20:45.332815 1732 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:20:45.333005 kubelet[1732]: I0906 00:20:45.332993 1732 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:20:45.337093 kubelet[1732]: I0906 00:20:45.337049 1732 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:20:45.337284 kubelet[1732]: I0906 00:20:45.337263 1732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:20:45.337405 kubelet[1732]: I0906 00:20:45.337389 1732 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:20:45.337541 kubelet[1732]: I0906 00:20:45.337527 1732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:20:45.347327 kubelet[1732]: I0906 00:20:45.347291 1732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:20:45.348344 kubelet[1732]: I0906 00:20:45.347780 1732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:20:45.349091 kubelet[1732]: W0906 00:20:45.349045 1732 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
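The container-manager line above dumps the effective node configuration as inline JSON (cgroupfs driver, hard-eviction thresholds, CPU/topology manager policies). Because the journal entry continues past the closing brace, json.JSONDecoder.raw_decode is a convenient way to pull the object out; a sketch under that assumption, shown against a shortened fragment of the entry:

    import json

    def parse_node_config(journal_line: str) -> dict:
        """Extract the JSON object printed after 'nodeConfig=' in the kubelet log line."""
        _, _, rest = journal_line.partition("nodeConfig=")
        if not rest:
            raise ValueError("no nodeConfig= field in this line")
        # raw_decode tolerates the trailing log text after the closing brace.
        obj, _ = json.JSONDecoder().raw_decode(rest)
        return obj

    # Shortened fragment shaped like the entry above (most fields omitted for brevity).
    line = ('"Creating Container Manager object based on Node Config" '
            'nodeConfig={"CgroupDriver":"cgroupfs","HardEvictionThresholds":'
            '[{"Signal":"memory.available","Operator":"LessThan",'
            '"Value":{"Quantity":"100Mi","Percentage":0}}]} Sep 6 ...')
    for t in parse_node_config(line)["HardEvictionThresholds"]:
        print(t["Signal"], t["Operator"], t["Value"])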
Sep 6 00:20:45.352509 kubelet[1732]: I0906 00:20:45.352473 1732 server.go:1274] "Started kubelet" Sep 6 00:20:45.352746 kubelet[1732]: W0906 00:20:45.352673 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0d6cc4df9c&limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:45.352800 kubelet[1732]: E0906 00:20:45.352766 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0d6cc4df9c&limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:45.354873 kubelet[1732]: W0906 00:20:45.354816 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:45.355005 kubelet[1732]: E0906 00:20:45.354986 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:45.355197 kubelet[1732]: I0906 00:20:45.355164 1732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:20:45.355613 kubelet[1732]: I0906 00:20:45.355581 1732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:20:45.357973 kubelet[1732]: I0906 00:20:45.357939 1732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:20:45.359046 kubelet[1732]: I0906 00:20:45.359028 1732 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:20:45.363000 audit[1732]: AVC avc: denied { mac_admin } for pid=1732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:45.364057 kubelet[1732]: I0906 00:20:45.364028 1732 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:20:45.364170 kubelet[1732]: I0906 00:20:45.364155 1732 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:20:45.364411 kubelet[1732]: I0906 00:20:45.364395 1732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:20:45.363000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:45.367518 kernel: audit: type=1400 audit(1757118045.363:215): avc: denied { mac_admin } for pid=1732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:45.367603 kernel: audit: type=1401 audit(1757118045.363:215): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:45.367628 kernel: audit: type=1300 audit(1757118045.363:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000a2b200 a1=c00089b4b8 a2=c000a2b1d0 a3=25 items=0 ppid=1 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.363000 audit[1732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a2b200 a1=c00089b4b8 a2=c000a2b1d0 a3=25 items=0 ppid=1 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:45.374178 kernel: audit: type=1327 audit(1757118045.363:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:45.374895 kubelet[1732]: I0906 00:20:45.374870 1732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:20:45.377239 kubelet[1732]: I0906 00:20:45.377217 1732 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:20:45.377652 kubelet[1732]: E0906 00:20:45.377630 1732 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0d6cc4df9c\" not found" Sep 6 00:20:45.378296 kubelet[1732]: I0906 00:20:45.378276 1732 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:20:45.378564 kubelet[1732]: I0906 00:20:45.378551 1732 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:20:45.363000 audit[1732]: AVC avc: denied { mac_admin } for pid=1732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:45.380141 kubelet[1732]: E0906 00:20:45.369827 1732 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.108.127:6443/api/v1/namespaces/default/events\": dial tcp 64.227.108.127:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-0d6cc4df9c.1862898d79a90a79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-0d6cc4df9c,UID:ci-3510.3.8-n-0d6cc4df9c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-0d6cc4df9c,},FirstTimestamp:2025-09-06 00:20:45.352430201 +0000 UTC m=+0.433959711,LastTimestamp:2025-09-06 00:20:45.352430201 +0000 UTC m=+0.433959711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-0d6cc4df9c,}" Sep 6 00:20:45.380756 kubelet[1732]: W0906 00:20:45.380700 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://64.227.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:45.380872 kubelet[1732]: E0906 00:20:45.380850 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:45.381016 kubelet[1732]: E0906 00:20:45.380995 1732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0d6cc4df9c?timeout=10s\": dial tcp 64.227.108.127:6443: connect: connection refused" interval="200ms" Sep 6 00:20:45.363000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:45.383466 kernel: audit: type=1400 audit(1757118045.363:216): avc: denied { mac_admin } for pid=1732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:45.383549 kernel: audit: type=1401 audit(1757118045.363:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:45.363000 audit[1732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a6c7e0 a1=c00089b4d0 a2=c000a2b290 a3=25 items=0 ppid=1 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:45.371000 audit[1744]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.371000 audit[1744]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc61c63ad0 a2=0 a3=7ffc61c63abc items=0 ppid=1732 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:20:45.378000 audit[1745]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.378000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee4fac7d0 a2=0 a3=7ffee4fac7bc items=0 ppid=1732 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:20:45.386531 kubelet[1732]: I0906 00:20:45.386488 1732 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:20:45.386531 
kubelet[1732]: I0906 00:20:45.386524 1732 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:20:45.386734 kubelet[1732]: I0906 00:20:45.386629 1732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:20:45.393000 audit[1747]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.393000 audit[1747]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcc1d0e590 a2=0 a3=7ffcc1d0e57c items=0 ppid=1732 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:20:45.398000 audit[1749]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.398000 audit[1749]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffce8a5a490 a2=0 a3=7ffce8a5a47c items=0 ppid=1732 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:20:45.415839 kubelet[1732]: I0906 00:20:45.414621 1732 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:20:45.415839 kubelet[1732]: I0906 00:20:45.414641 1732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:20:45.415839 kubelet[1732]: I0906 00:20:45.414659 1732 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:20:45.416228 kubelet[1732]: I0906 00:20:45.416209 1732 policy_none.go:49] "None policy: Start" Sep 6 00:20:45.417404 kubelet[1732]: I0906 00:20:45.417383 1732 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:20:45.417601 kubelet[1732]: I0906 00:20:45.417585 1732 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:20:45.419000 audit[1754]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.419000 audit[1754]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdc2a95e00 a2=0 a3=7ffdc2a95dec items=0 ppid=1732 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 6 00:20:45.420967 kubelet[1732]: I0906 00:20:45.420878 1732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 6 00:20:45.421000 audit[1755]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:45.421000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffff582f9e0 a2=0 a3=7ffff582f9cc items=0 ppid=1732 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:20:45.422690 kubelet[1732]: I0906 00:20:45.422587 1732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:20:45.422690 kubelet[1732]: I0906 00:20:45.422618 1732 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:20:45.422690 kubelet[1732]: I0906 00:20:45.422646 1732 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:20:45.422895 kubelet[1732]: E0906 00:20:45.422797 1732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:20:45.423000 audit[1756]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.423000 audit[1756]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd8e56720 a2=0 a3=7ffcd8e5670c items=0 ppid=1732 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:20:45.426000 audit[1757]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.426000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4ec56be0 a2=0 a3=7ffc4ec56bcc items=0 ppid=1732 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:20:45.428000 audit[1759]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:45.428000 audit[1759]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5c3c9ad0 a2=0 a3=7ffd5c3c9abc items=0 ppid=1732 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:20:45.429000 audit[1760]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:45.429000 audit[1760]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb974c010 a2=0 a3=7ffeb974bffc items=0 ppid=1732 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:20:45.431000 audit[1761]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:45.431000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd8eba89a0 a2=0 a3=7ffd8eba898c items=0 ppid=1732 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:20:45.433000 audit[1762]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:45.433000 audit[1762]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd3c67a760 a2=0 a3=7ffd3c67a74c items=0 ppid=1732 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:20:45.436659 kubelet[1732]: W0906 00:20:45.436619 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:45.436812 kubelet[1732]: E0906 00:20:45.436668 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:45.439587 kubelet[1732]: I0906 00:20:45.439547 1732 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:20:45.438000 audit[1732]: AVC avc: denied { mac_admin } for pid=1732 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:45.438000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:45.438000 audit[1732]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a2bc80 a1=c000770a68 a2=c000a2bc50 a3=25 items=0 ppid=1 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:45.438000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:45.439983 kubelet[1732]: I0906 00:20:45.439631 1732 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:20:45.439983 kubelet[1732]: I0906 00:20:45.439832 1732 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:20:45.439983 kubelet[1732]: I0906 00:20:45.439846 1732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:20:45.442137 kubelet[1732]: I0906 00:20:45.442117 1732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:20:45.442802 kubelet[1732]: E0906 00:20:45.442783 1732 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-0d6cc4df9c\" not found" Sep 6 00:20:45.542195 kubelet[1732]: I0906 00:20:45.542037 1732 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.546751 kubelet[1732]: E0906 00:20:45.546689 1732 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.108.127:6443/api/v1/nodes\": dial tcp 64.227.108.127:6443: connect: connection refused" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.582222 kubelet[1732]: E0906 00:20:45.582176 1732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0d6cc4df9c?timeout=10s\": dial tcp 64.227.108.127:6443: connect: connection refused" interval="400ms" Sep 6 00:20:45.679787 kubelet[1732]: I0906 00:20:45.679692 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680097 kubelet[1732]: I0906 00:20:45.680068 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18463840cd394eb5d1c1320905d5b642-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"18463840cd394eb5d1c1320905d5b642\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680276 kubelet[1732]: I0906 00:20:45.680236 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680408 kubelet[1732]: I0906 00:20:45.680390 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: 
\"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680527 kubelet[1732]: I0906 00:20:45.680509 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680637 kubelet[1732]: I0906 00:20:45.680620 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680769 kubelet[1732]: I0906 00:20:45.680751 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.680899 kubelet[1732]: I0906 00:20:45.680880 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.681018 kubelet[1732]: I0906 00:20:45.681002 1732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.749076 kubelet[1732]: I0906 00:20:45.749036 1732 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.749545 kubelet[1732]: E0906 00:20:45.749468 1732 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.108.127:6443/api/v1/nodes\": dial tcp 64.227.108.127:6443: connect: connection refused" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:45.828356 kubelet[1732]: E0906 00:20:45.828183 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:45.830142 env[1286]: time="2025-09-06T00:20:45.829811814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0d6cc4df9c,Uid:18463840cd394eb5d1c1320905d5b642,Namespace:kube-system,Attempt:0,}" Sep 6 00:20:45.831008 kubelet[1732]: E0906 00:20:45.830934 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:45.831641 env[1286]: time="2025-09-06T00:20:45.831435995Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0d6cc4df9c,Uid:099666968b93f8d304471253b44d93bf,Namespace:kube-system,Attempt:0,}" Sep 6 00:20:45.834519 kubelet[1732]: E0906 00:20:45.834447 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:45.835278 env[1286]: time="2025-09-06T00:20:45.834947274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c,Uid:076f6ffbb3acd7702ac4318cc033e76b,Namespace:kube-system,Attempt:0,}" Sep 6 00:20:45.983450 kubelet[1732]: E0906 00:20:45.983388 1732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0d6cc4df9c?timeout=10s\": dial tcp 64.227.108.127:6443: connect: connection refused" interval="800ms" Sep 6 00:20:46.151048 kubelet[1732]: I0906 00:20:46.151012 1732 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:46.151522 kubelet[1732]: E0906 00:20:46.151486 1732 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.108.127:6443/api/v1/nodes\": dial tcp 64.227.108.127:6443: connect: connection refused" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:46.292518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476364652.mount: Deactivated successfully. Sep 6 00:20:46.297733 env[1286]: time="2025-09-06T00:20:46.297663516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.298509 env[1286]: time="2025-09-06T00:20:46.298478632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.299700 kubelet[1732]: W0906 00:20:46.299627 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0d6cc4df9c&limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:46.299824 kubelet[1732]: E0906 00:20:46.299734 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.108.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0d6cc4df9c&limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:46.301082 env[1286]: time="2025-09-06T00:20:46.301046095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.301705 env[1286]: time="2025-09-06T00:20:46.301676560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.302352 env[1286]: time="2025-09-06T00:20:46.302317742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 
00:20:46.303046 env[1286]: time="2025-09-06T00:20:46.303017400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.305361 env[1286]: time="2025-09-06T00:20:46.305332172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.308362 kubelet[1732]: W0906 00:20:46.308299 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:46.308528 kubelet[1732]: E0906 00:20:46.308387 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.108.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:46.308899 env[1286]: time="2025-09-06T00:20:46.308868475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.310143 env[1286]: time="2025-09-06T00:20:46.310116470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.312469 env[1286]: time="2025-09-06T00:20:46.312439672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.313181 env[1286]: time="2025-09-06T00:20:46.313151694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.317919 env[1286]: time="2025-09-06T00:20:46.317889342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:46.349453 env[1286]: time="2025-09-06T00:20:46.349181875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:46.349453 env[1286]: time="2025-09-06T00:20:46.349217555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:46.349453 env[1286]: time="2025-09-06T00:20:46.349228742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:46.349453 env[1286]: time="2025-09-06T00:20:46.349352412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c39f2aa4356832ca2b1b1807a6827e5e090f1f0afc365e75592229eb0e60ec4 pid=1779 runtime=io.containerd.runc.v2 Sep 6 00:20:46.352123 env[1286]: time="2025-09-06T00:20:46.352039427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:46.352308 env[1286]: time="2025-09-06T00:20:46.352083468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:46.352308 env[1286]: time="2025-09-06T00:20:46.352111795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:46.352429 env[1286]: time="2025-09-06T00:20:46.352349288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de3a39c07cd06e13a75e56a09d7f7e14fb3afef95d67b4057f949c6a989e2591 pid=1780 runtime=io.containerd.runc.v2 Sep 6 00:20:46.359401 env[1286]: time="2025-09-06T00:20:46.359323575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:46.359608 env[1286]: time="2025-09-06T00:20:46.359578299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:46.359774 env[1286]: time="2025-09-06T00:20:46.359742386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:46.360071 env[1286]: time="2025-09-06T00:20:46.360026953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c042d92e02c24e8de78310681c949a8e605153988add2e6ce8600ccd5fea85 pid=1805 runtime=io.containerd.runc.v2 Sep 6 00:20:46.457222 env[1286]: time="2025-09-06T00:20:46.456324321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c,Uid:076f6ffbb3acd7702ac4318cc033e76b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c39f2aa4356832ca2b1b1807a6827e5e090f1f0afc365e75592229eb0e60ec4\"" Sep 6 00:20:46.458162 kubelet[1732]: W0906 00:20:46.458105 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:46.458312 kubelet[1732]: E0906 00:20:46.458174 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.108.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:46.459246 kubelet[1732]: E0906 00:20:46.459222 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:46.462905 env[1286]: time="2025-09-06T00:20:46.462860653Z" level=info msg="CreateContainer within sandbox \"9c39f2aa4356832ca2b1b1807a6827e5e090f1f0afc365e75592229eb0e60ec4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:20:46.478203 env[1286]: time="2025-09-06T00:20:46.478128921Z" level=info msg="CreateContainer within sandbox \"9c39f2aa4356832ca2b1b1807a6827e5e090f1f0afc365e75592229eb0e60ec4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7bc87afd0050f4ee85a1adcb77c2b02977488e5fa0f5a4a3b871211aa7506cd6\"" Sep 6 00:20:46.479384 env[1286]: time="2025-09-06T00:20:46.479340500Z" level=info msg="StartContainer for \"7bc87afd0050f4ee85a1adcb77c2b02977488e5fa0f5a4a3b871211aa7506cd6\"" Sep 6 00:20:46.492917 env[1286]: time="2025-09-06T00:20:46.492869096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0d6cc4df9c,Uid:099666968b93f8d304471253b44d93bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"52c042d92e02c24e8de78310681c949a8e605153988add2e6ce8600ccd5fea85\"" Sep 6 00:20:46.494037 kubelet[1732]: E0906 00:20:46.493661 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:46.494150 env[1286]: time="2025-09-06T00:20:46.493661221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0d6cc4df9c,Uid:18463840cd394eb5d1c1320905d5b642,Namespace:kube-system,Attempt:0,} returns sandbox id \"de3a39c07cd06e13a75e56a09d7f7e14fb3afef95d67b4057f949c6a989e2591\"" Sep 6 00:20:46.494621 kubelet[1732]: E0906 00:20:46.494360 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Sep 6 00:20:46.495991 env[1286]: time="2025-09-06T00:20:46.495958201Z" level=info msg="CreateContainer within sandbox \"52c042d92e02c24e8de78310681c949a8e605153988add2e6ce8600ccd5fea85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:20:46.496440 env[1286]: time="2025-09-06T00:20:46.496412006Z" level=info msg="CreateContainer within sandbox \"de3a39c07cd06e13a75e56a09d7f7e14fb3afef95d67b4057f949c6a989e2591\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:20:46.508231 env[1286]: time="2025-09-06T00:20:46.508109212Z" level=info msg="CreateContainer within sandbox \"de3a39c07cd06e13a75e56a09d7f7e14fb3afef95d67b4057f949c6a989e2591\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ec9d415da140a2eb39cc1154b375361aef116a2cdb1d0a4533cb4469686aefb\"" Sep 6 00:20:46.508879 env[1286]: time="2025-09-06T00:20:46.508852109Z" level=info msg="StartContainer for \"9ec9d415da140a2eb39cc1154b375361aef116a2cdb1d0a4533cb4469686aefb\"" Sep 6 00:20:46.509497 env[1286]: time="2025-09-06T00:20:46.509451494Z" level=info msg="CreateContainer within sandbox \"52c042d92e02c24e8de78310681c949a8e605153988add2e6ce8600ccd5fea85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ecc4417dcdddd0c66969e8b2b9d88197ec49445716ee36196ec4d01a32cc77c\"" Sep 6 00:20:46.510037 env[1286]: time="2025-09-06T00:20:46.510010966Z" level=info msg="StartContainer for \"5ecc4417dcdddd0c66969e8b2b9d88197ec49445716ee36196ec4d01a32cc77c\"" Sep 6 00:20:46.644159 kubelet[1732]: W0906 00:20:46.644077 1732 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.108.127:6443: connect: connection refused Sep 6 00:20:46.644373 kubelet[1732]: E0906 00:20:46.644162 1732 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.108.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.108.127:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:20:46.657240 env[1286]: time="2025-09-06T00:20:46.657141055Z" level=info msg="StartContainer for \"9ec9d415da140a2eb39cc1154b375361aef116a2cdb1d0a4533cb4469686aefb\" returns successfully" Sep 6 00:20:46.663006 env[1286]: time="2025-09-06T00:20:46.662908085Z" level=info msg="StartContainer for \"7bc87afd0050f4ee85a1adcb77c2b02977488e5fa0f5a4a3b871211aa7506cd6\" returns successfully" Sep 6 00:20:46.678169 env[1286]: time="2025-09-06T00:20:46.678118250Z" level=info msg="StartContainer for \"5ecc4417dcdddd0c66969e8b2b9d88197ec49445716ee36196ec4d01a32cc77c\" returns successfully" Sep 6 00:20:46.784599 kubelet[1732]: E0906 00:20:46.784453 1732 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.108.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0d6cc4df9c?timeout=10s\": dial tcp 64.227.108.127:6443: connect: connection refused" interval="1.6s" Sep 6 00:20:46.953463 kubelet[1732]: I0906 00:20:46.953426 1732 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:46.953810 kubelet[1732]: E0906 00:20:46.953782 1732 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.108.127:6443/api/v1/nodes\": dial tcp 
64.227.108.127:6443: connect: connection refused" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:47.451190 kubelet[1732]: E0906 00:20:47.451161 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:47.452637 kubelet[1732]: E0906 00:20:47.452607 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:47.455398 kubelet[1732]: E0906 00:20:47.455369 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:48.455877 kubelet[1732]: E0906 00:20:48.455846 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:48.532788 kubelet[1732]: E0906 00:20:48.532738 1732 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-0d6cc4df9c\" not found" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:48.555819 kubelet[1732]: I0906 00:20:48.555792 1732 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:48.565973 kubelet[1732]: I0906 00:20:48.565927 1732 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:48.566201 kubelet[1732]: E0906 00:20:48.566183 1732 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-0d6cc4df9c\": node \"ci-3510.3.8-n-0d6cc4df9c\" not found" Sep 6 00:20:49.356819 kubelet[1732]: I0906 00:20:49.356771 1732 apiserver.go:52] "Watching apiserver" Sep 6 00:20:49.379651 kubelet[1732]: I0906 00:20:49.379596 1732 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:20:50.761151 kubelet[1732]: W0906 00:20:50.761107 1732 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:20:50.761582 kubelet[1732]: E0906 00:20:50.761357 1732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:50.876670 systemd[1]: Reloading. Sep 6 00:20:50.979287 /usr/lib/systemd/system-generators/torcx-generator[2021]: time="2025-09-06T00:20:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:50.979871 /usr/lib/systemd/system-generators/torcx-generator[2021]: time="2025-09-06T00:20:50Z" level=info msg="torcx already run" Sep 6 00:20:51.111259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:51.111518 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 6 00:20:51.144744 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:51.257465 systemd[1]: Stopping kubelet.service... Sep 6 00:20:51.279467 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:20:51.279901 systemd[1]: Stopped kubelet.service. Sep 6 00:20:51.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:51.281099 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 6 00:20:51.281206 kernel: audit: type=1131 audit(1757118051.279:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:51.289870 systemd[1]: Starting kubelet.service... Sep 6 00:20:52.329837 kernel: audit: type=1130 audit(1757118052.323:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:52.323904 systemd[1]: Started kubelet.service. Sep 6 00:20:52.450161 kubelet[2083]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:20:52.450666 kubelet[2083]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:20:52.450760 kubelet[2083]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:20:52.461651 kubelet[2083]: I0906 00:20:52.461567 2083 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:20:52.469516 kubelet[2083]: I0906 00:20:52.469474 2083 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:20:52.469708 kubelet[2083]: I0906 00:20:52.469694 2083 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:20:52.470558 kubelet[2083]: I0906 00:20:52.470527 2083 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:20:52.479021 kubelet[2083]: I0906 00:20:52.478974 2083 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
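A message that recurs throughout the kubelet output above is dns.go:153: "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2". The kubelet copies the node's resolver list into pod sandboxes but keeps at most three nameserver entries (the classic resolv.conf limit), and it does not deduplicate the list first, which is why the applied line still contains 67.207.67.2 twice. A minimal Go sketch of that truncation, illustrative only and not the kubelet's actual code; the fourth nameserver below is hypothetical:

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the three-entry resolver limit that the kubelet
    // warning above refers to ("some nameservers have been omitted").
    const maxNameservers = 3

    // applyNameserverLimit keeps only the first maxNameservers entries, the way
    // the "applied nameserver line" in the log was produced. Sketch only.
    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        // Hypothetical host resolver list; the duplicate 67.207.67.2 matches the
        // applied line shown in the kubelet messages above.
        host := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.4"}
        fmt.Println(strings.Join(applyNameserverLimit(host), " "))
        // Output: 67.207.67.2 67.207.67.3 67.207.67.2
    }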
Sep 6 00:20:52.482951 kubelet[2083]: I0906 00:20:52.482918 2083 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:20:52.500959 kubelet[2083]: E0906 00:20:52.500900 2083 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:20:52.501177 kubelet[2083]: I0906 00:20:52.501160 2083 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:20:52.504900 kubelet[2083]: I0906 00:20:52.504862 2083 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:20:52.505680 kubelet[2083]: I0906 00:20:52.505655 2083 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:20:52.506087 kubelet[2083]: I0906 00:20:52.506048 2083 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:20:52.506361 kubelet[2083]: I0906 00:20:52.506182 2083 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0d6cc4df9c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:20:52.506515 kubelet[2083]: I0906 00:20:52.506502 2083 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:20:52.506591 kubelet[2083]: I0906 00:20:52.506581 2083 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:20:52.506685 kubelet[2083]: I0906 00:20:52.506675 2083 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:20:52.506896 kubelet[2083]: I0906 00:20:52.506884 2083 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:20:52.506989 kubelet[2083]: I0906 00:20:52.506977 2083 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:20:52.507077 kubelet[2083]: I0906 00:20:52.507066 
2083 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:20:52.507143 kubelet[2083]: I0906 00:20:52.507133 2083 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:20:52.521864 kubelet[2083]: I0906 00:20:52.521307 2083 apiserver.go:52] "Watching apiserver" Sep 6 00:20:52.530323 kubelet[2083]: I0906 00:20:52.529623 2083 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:20:52.531740 kubelet[2083]: I0906 00:20:52.531698 2083 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:20:52.542087 kubelet[2083]: I0906 00:20:52.541022 2083 server.go:1274] "Started kubelet" Sep 6 00:20:52.545146 kubelet[2083]: I0906 00:20:52.543986 2083 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:20:52.545624 kubelet[2083]: I0906 00:20:52.545607 2083 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:20:52.552784 kernel: audit: type=1400 audit(1757118052.545:232): avc: denied { mac_admin } for pid=2083 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:52.552906 kernel: audit: type=1401 audit(1757118052.545:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:52.552933 kernel: audit: type=1300 audit(1757118052.545:232): arch=c000003e syscall=188 success=no exit=-22 a0=c000b74690 a1=c000bae3a8 a2=c000b74660 a3=25 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.545000 audit[2083]: AVC avc: denied { mac_admin } for pid=2083 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:52.545000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:52.545000 audit[2083]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b74690 a1=c000bae3a8 a2=c000b74660 a3=25 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.553151 kubelet[2083]: I0906 00:20:52.546105 2083 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:20:52.553151 kubelet[2083]: I0906 00:20:52.546858 2083 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:20:52.553151 kubelet[2083]: I0906 00:20:52.546888 2083 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:20:52.553151 kubelet[2083]: E0906 00:20:52.551653 2083 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:20:52.553151 kubelet[2083]: I0906 00:20:52.546202 2083 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:20:52.545000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:52.556502 kernel: audit: type=1327 audit(1757118052.545:232): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:52.546000 audit[2083]: AVC avc: denied { mac_admin } for pid=2083 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:52.559800 kernel: audit: type=1400 audit(1757118052.546:233): avc: denied { mac_admin } for pid=2083 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:52.546000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:52.561733 kernel: audit: type=1401 audit(1757118052.546:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:52.546000 audit[2083]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000754ac0 a1=c0005ed650 a2=c000665b30 a3=25 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.565875 kernel: audit: type=1300 audit(1757118052.546:233): arch=c000003e syscall=188 success=no exit=-22 a0=c000754ac0 a1=c0005ed650 a2=c000665b30 a3=25 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.568316 kubelet[2083]: I0906 00:20:52.568286 2083 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:20:52.572033 kubelet[2083]: I0906 00:20:52.571978 2083 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:20:52.572416 kubelet[2083]: I0906 00:20:52.572396 2083 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:20:52.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:52.576762 kernel: audit: type=1327 audit(1757118052.546:233): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:52.578854 kubelet[2083]: I0906 00:20:52.578818 2083 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" 
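The audit records interleaved above pair the denied setxattr attempts (capability=33 is CAP_MAC_ADMIN, matching the "denied { mac_admin }" AVC) with a PROCTITLE field, which auditd emits as the process's argv, hex-encoded with NUL separators and truncated, which is why the value breaks off mid-flag. A short, self-contained sketch that decodes a prefix of that value (copied from the records above) back into a readable command line:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Prefix of the PROCTITLE value from the audit records above; it decodes to
        // /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
        const proctitle = "2F7573722F62696E2F6B7562656C6574" +
            "002D2D626F6F7473747261702D6B756265636F6E6669673D" +
            "2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // argv elements are separated by NUL bytes in the audit record.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    }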
Sep 6 00:20:52.583152 kubelet[2083]: I0906 00:20:52.583038 2083 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:20:52.591843 kubelet[2083]: I0906 00:20:52.591815 2083 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:20:52.592173 kubelet[2083]: I0906 00:20:52.592145 2083 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:20:52.605638 kubelet[2083]: I0906 00:20:52.605595 2083 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:20:52.606549 kubelet[2083]: I0906 00:20:52.606520 2083 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:20:52.608694 kubelet[2083]: I0906 00:20:52.608664 2083 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:20:52.608905 kubelet[2083]: I0906 00:20:52.608885 2083 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:20:52.609023 kubelet[2083]: I0906 00:20:52.609010 2083 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:20:52.609153 kubelet[2083]: E0906 00:20:52.609134 2083 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:20:52.710383 kubelet[2083]: E0906 00:20:52.710351 2083 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:20:52.716888 kubelet[2083]: I0906 00:20:52.716853 2083 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:20:52.717143 kubelet[2083]: I0906 00:20:52.717125 2083 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:20:52.717228 kubelet[2083]: I0906 00:20:52.717217 2083 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:20:52.718855 kubelet[2083]: I0906 00:20:52.718828 2083 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:20:52.719150 kubelet[2083]: I0906 00:20:52.719096 2083 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:20:52.719249 kubelet[2083]: I0906 00:20:52.719238 2083 policy_none.go:49] "None policy: Start" Sep 6 00:20:52.725514 kubelet[2083]: I0906 00:20:52.725469 2083 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:20:52.725830 kubelet[2083]: I0906 00:20:52.725797 2083 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:20:52.726245 kubelet[2083]: I0906 00:20:52.726230 2083 state_mem.go:75] "Updated machine memory state" Sep 6 00:20:52.730771 kubelet[2083]: I0906 00:20:52.730743 2083 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:20:52.732289 kubelet[2083]: I0906 00:20:52.731828 2083 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:20:52.732289 kubelet[2083]: I0906 00:20:52.732042 2083 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:20:52.732289 kubelet[2083]: I0906 00:20:52.732055 2083 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:20:52.731000 audit[2083]: AVC avc: denied { mac_admin } for pid=2083 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:20:52.731000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:20:52.731000 audit[2083]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00120d590 a1=c0011ed668 a2=c00120d560 a3=25 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:52.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:20:52.737128 kubelet[2083]: I0906 00:20:52.734117 2083 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:20:52.857181 kubelet[2083]: I0906 00:20:52.854960 2083 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.864596 kubelet[2083]: I0906 00:20:52.864175 2083 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.864596 kubelet[2083]: I0906 00:20:52.864284 2083 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.924447 kubelet[2083]: W0906 00:20:52.924390 2083 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:20:52.927839 kubelet[2083]: W0906 00:20:52.927675 2083 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:20:52.956483 kubelet[2083]: I0906 00:20:52.955938 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0d6cc4df9c" podStartSLOduration=2.95591329 podStartE2EDuration="2.95591329s" podCreationTimestamp="2025-09-06 00:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:52.945641966 +0000 UTC m=+0.594878168" watchObservedRunningTime="2025-09-06 00:20:52.95591329 +0000 UTC m=+0.605149490" Sep 6 00:20:52.969742 kubelet[2083]: I0906 00:20:52.969650 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" podStartSLOduration=0.969589476 podStartE2EDuration="969.589476ms" podCreationTimestamp="2025-09-06 00:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:52.956755344 +0000 UTC m=+0.605991545" 
watchObservedRunningTime="2025-09-06 00:20:52.969589476 +0000 UTC m=+0.618825668" Sep 6 00:20:52.973161 kubelet[2083]: I0906 00:20:52.973122 2083 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:20:52.979189 kubelet[2083]: I0906 00:20:52.978812 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979189 kubelet[2083]: I0906 00:20:52.978848 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979189 kubelet[2083]: I0906 00:20:52.978878 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/099666968b93f8d304471253b44d93bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"099666968b93f8d304471253b44d93bf\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979189 kubelet[2083]: I0906 00:20:52.978903 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18463840cd394eb5d1c1320905d5b642-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"18463840cd394eb5d1c1320905d5b642\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979189 kubelet[2083]: I0906 00:20:52.978930 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979478 kubelet[2083]: I0906 00:20:52.978965 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979478 kubelet[2083]: I0906 00:20:52.978981 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979478 kubelet[2083]: I0906 00:20:52.978996 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" 
(UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:52.979478 kubelet[2083]: I0906 00:20:52.979015 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/076f6ffbb3acd7702ac4318cc033e76b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c\" (UID: \"076f6ffbb3acd7702ac4318cc033e76b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:20:53.214936 kubelet[2083]: E0906 00:20:53.214888 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:53.227708 kubelet[2083]: E0906 00:20:53.227650 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:53.228158 kubelet[2083]: E0906 00:20:53.228132 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:53.489851 kubelet[2083]: I0906 00:20:53.489654 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0d6cc4df9c" podStartSLOduration=1.489629919 podStartE2EDuration="1.489629919s" podCreationTimestamp="2025-09-06 00:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:52.970584447 +0000 UTC m=+0.619820649" watchObservedRunningTime="2025-09-06 00:20:53.489629919 +0000 UTC m=+1.138866122" Sep 6 00:20:53.671803 kubelet[2083]: E0906 00:20:53.671701 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:53.672408 kubelet[2083]: E0906 00:20:53.672373 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:54.673943 kubelet[2083]: E0906 00:20:54.673879 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:56.431361 kubelet[2083]: E0906 00:20:56.431322 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:56.678157 kubelet[2083]: E0906 00:20:56.678122 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:57.370380 kubelet[2083]: I0906 00:20:57.370336 2083 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:20:57.370945 env[1286]: time="2025-09-06T00:20:57.370909978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:20:57.371791 kubelet[2083]: I0906 00:20:57.371767 2083 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:20:58.412859 kubelet[2083]: I0906 00:20:58.412806 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tgvz\" (UniqueName: \"kubernetes.io/projected/2ea5a76e-38a0-4497-9451-5b8199b49832-kube-api-access-4tgvz\") pod \"kube-proxy-vr86h\" (UID: \"2ea5a76e-38a0-4497-9451-5b8199b49832\") " pod="kube-system/kube-proxy-vr86h" Sep 6 00:20:58.413357 kubelet[2083]: I0906 00:20:58.412900 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ea5a76e-38a0-4497-9451-5b8199b49832-kube-proxy\") pod \"kube-proxy-vr86h\" (UID: \"2ea5a76e-38a0-4497-9451-5b8199b49832\") " pod="kube-system/kube-proxy-vr86h" Sep 6 00:20:58.413357 kubelet[2083]: I0906 00:20:58.412941 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ea5a76e-38a0-4497-9451-5b8199b49832-xtables-lock\") pod \"kube-proxy-vr86h\" (UID: \"2ea5a76e-38a0-4497-9451-5b8199b49832\") " pod="kube-system/kube-proxy-vr86h" Sep 6 00:20:58.413357 kubelet[2083]: I0906 00:20:58.412961 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ea5a76e-38a0-4497-9451-5b8199b49832-lib-modules\") pod \"kube-proxy-vr86h\" (UID: \"2ea5a76e-38a0-4497-9451-5b8199b49832\") " pod="kube-system/kube-proxy-vr86h" Sep 6 00:20:58.513693 kubelet[2083]: I0906 00:20:58.513634 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8582bee-8352-4384-a2ef-c781250d0f6b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-ml7bx\" (UID: \"e8582bee-8352-4384-a2ef-c781250d0f6b\") " pod="tigera-operator/tigera-operator-58fc44c59b-ml7bx" Sep 6 00:20:58.513693 kubelet[2083]: I0906 00:20:58.513734 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vskp\" (UniqueName: \"kubernetes.io/projected/e8582bee-8352-4384-a2ef-c781250d0f6b-kube-api-access-9vskp\") pod \"tigera-operator-58fc44c59b-ml7bx\" (UID: \"e8582bee-8352-4384-a2ef-c781250d0f6b\") " pod="tigera-operator/tigera-operator-58fc44c59b-ml7bx" Sep 6 00:20:58.522273 kubelet[2083]: I0906 00:20:58.522223 2083 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:20:58.599444 kubelet[2083]: E0906 00:20:58.599404 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:58.635562 kubelet[2083]: E0906 00:20:58.635520 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:58.637089 env[1286]: time="2025-09-06T00:20:58.637011405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vr86h,Uid:2ea5a76e-38a0-4497-9451-5b8199b49832,Namespace:kube-system,Attempt:0,}" Sep 6 00:20:58.656041 env[1286]: time="2025-09-06T00:20:58.655942836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:58.656041 env[1286]: time="2025-09-06T00:20:58.656039025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:58.656315 env[1286]: time="2025-09-06T00:20:58.656061907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:58.656315 env[1286]: time="2025-09-06T00:20:58.656222014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e03a78086339748ad4efb8709612699fae49017d30e5cb8f98999cba49a8f561 pid=2133 runtime=io.containerd.runc.v2 Sep 6 00:20:58.683198 kubelet[2083]: E0906 00:20:58.681296 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:58.736472 env[1286]: time="2025-09-06T00:20:58.736426105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vr86h,Uid:2ea5a76e-38a0-4497-9451-5b8199b49832,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03a78086339748ad4efb8709612699fae49017d30e5cb8f98999cba49a8f561\"" Sep 6 00:20:58.737648 kubelet[2083]: E0906 00:20:58.737619 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:58.742772 env[1286]: time="2025-09-06T00:20:58.742255690Z" level=info msg="CreateContainer within sandbox \"e03a78086339748ad4efb8709612699fae49017d30e5cb8f98999cba49a8f561\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:20:58.754450 env[1286]: time="2025-09-06T00:20:58.754395180Z" level=info msg="CreateContainer within sandbox \"e03a78086339748ad4efb8709612699fae49017d30e5cb8f98999cba49a8f561\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b30a1088563f6ebdcadd8bcf1c14e96a08bb74b05a0dbe32bfc0549ac7452489\"" Sep 6 00:20:58.756623 env[1286]: time="2025-09-06T00:20:58.755306243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ml7bx,Uid:e8582bee-8352-4384-a2ef-c781250d0f6b,Namespace:tigera-operator,Attempt:0,}" Sep 6 00:20:58.756623 env[1286]: time="2025-09-06T00:20:58.755655786Z" level=info msg="StartContainer for \"b30a1088563f6ebdcadd8bcf1c14e96a08bb74b05a0dbe32bfc0549ac7452489\"" Sep 6 
00:20:58.791401 env[1286]: time="2025-09-06T00:20:58.784436349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:20:58.791401 env[1286]: time="2025-09-06T00:20:58.784536297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:20:58.791401 env[1286]: time="2025-09-06T00:20:58.784581155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:20:58.791401 env[1286]: time="2025-09-06T00:20:58.784787323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/404c55c9d9182684fd2832f0f94283315010d7ae6f0352b0d33fb6c6564908cb pid=2183 runtime=io.containerd.runc.v2 Sep 6 00:20:58.851237 env[1286]: time="2025-09-06T00:20:58.850523257Z" level=info msg="StartContainer for \"b30a1088563f6ebdcadd8bcf1c14e96a08bb74b05a0dbe32bfc0549ac7452489\" returns successfully" Sep 6 00:20:58.865823 env[1286]: time="2025-09-06T00:20:58.865773057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ml7bx,Uid:e8582bee-8352-4384-a2ef-c781250d0f6b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"404c55c9d9182684fd2832f0f94283315010d7ae6f0352b0d33fb6c6564908cb\"" Sep 6 00:20:58.869429 env[1286]: time="2025-09-06T00:20:58.867699427Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 00:20:59.015000 audit[2277]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.019073 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 6 00:20:59.019198 kernel: audit: type=1325 audit(1757118059.015:235): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.019248 kernel: audit: type=1300 audit(1757118059.015:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccb52e9d0 a2=0 a3=7ffccb52e9bc items=0 ppid=2208 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.015000 audit[2277]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccb52e9d0 a2=0 a3=7ffccb52e9bc items=0 ppid=2208 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:20:59.024160 kernel: audit: type=1327 audit(1757118059.015:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:20:59.018000 audit[2279]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.026223 kernel: audit: type=1325 audit(1757118059.018:236): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.018000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9d8224a0 a2=0 a3=7fff9d82248c items=0 
ppid=2208 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.029634 kernel: audit: type=1300 audit(1757118059.018:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9d8224a0 a2=0 a3=7fff9d82248c items=0 ppid=2208 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:20:59.032744 kernel: audit: type=1327 audit(1757118059.018:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:20:59.019000 audit[2280]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.034782 kernel: audit: type=1325 audit(1757118059.019:237): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.019000 audit[2280]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a03eeb0 a2=0 a3=7fff5a03ee9c items=0 ppid=2208 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:20:59.040198 kernel: audit: type=1300 audit(1757118059.019:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a03eeb0 a2=0 a3=7fff5a03ee9c items=0 ppid=2208 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.040255 kernel: audit: type=1327 audit(1757118059.019:237): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:20:59.031000 audit[2278]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.041945 kernel: audit: type=1325 audit(1757118059.031:238): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.031000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf6bc6210 a2=0 a3=7ffcf6bc61fc items=0 ppid=2208 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:20:59.033000 audit[2281]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.033000 audit[2281]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc99c8a20 a2=0 a3=7fffc99c8a0c 
items=0 ppid=2208 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:20:59.036000 audit[2282]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.036000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3b771170 a2=0 a3=7ffe3b77115c items=0 ppid=2208 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.036000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:20:59.124000 audit[2283]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.124000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe49e87480 a2=0 a3=7ffe49e8746c items=0 ppid=2208 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:20:59.130000 audit[2285]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.130000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffca55cce50 a2=0 a3=7ffca55cce3c items=0 ppid=2208 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 6 00:20:59.134000 audit[2288]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.134000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd3c6463a0 a2=0 a3=7ffd3c64638c items=0 ppid=2208 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.134000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 6 00:20:59.136000 audit[2289]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Sep 6 00:20:59.136000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8c5222c0 a2=0 a3=7ffe8c5222ac items=0 ppid=2208 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:20:59.139000 audit[2291]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.139000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc68944a0 a2=0 a3=7ffdc689448c items=0 ppid=2208 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:20:59.140000 audit[2292]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.140000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1492dd20 a2=0 a3=7ffd1492dd0c items=0 ppid=2208 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:20:59.143000 audit[2294]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.143000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb58714e0 a2=0 a3=7ffeb58714cc items=0 ppid=2208 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:20:59.148000 audit[2297]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.148000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3ecad2b0 a2=0 a3=7ffc3ecad29c items=0 ppid=2208 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.148000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 6 00:20:59.149000 audit[2298]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.149000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3a595b50 a2=0 a3=7ffc3a595b3c items=0 ppid=2208 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:20:59.153000 audit[2300]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.153000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff14f37180 a2=0 a3=7fff14f3716c items=0 ppid=2208 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:20:59.154000 audit[2301]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.154000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff147fa790 a2=0 a3=7fff147fa77c items=0 ppid=2208 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:20:59.158000 audit[2303]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.158000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6264a9f0 a2=0 a3=7ffd6264a9dc items=0 ppid=2208 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:20:59.162000 audit[2306]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.162000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe01d9d2e0 a2=0 a3=7ffe01d9d2cc items=0 ppid=2208 pid=2306 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:20:59.167000 audit[2309]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.167000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2b500000 a2=0 a3=7fff2b4fffec items=0 ppid=2208 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:20:59.169000 audit[2310]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.169000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee14e1c20 a2=0 a3=7ffee14e1c0c items=0 ppid=2208 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:20:59.172000 audit[2312]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.172000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff00e63e50 a2=0 a3=7fff00e63e3c items=0 ppid=2208 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:20:59.176000 audit[2315]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.176000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4926e240 a2=0 a3=7ffc4926e22c items=0 ppid=2208 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 
00:20:59.178000 audit[2316]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.178000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc56b1b2b0 a2=0 a3=7ffc56b1b29c items=0 ppid=2208 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:20:59.181000 audit[2318]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:20:59.181000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc6d7ee8c0 a2=0 a3=7ffc6d7ee8ac items=0 ppid=2208 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:20:59.213000 audit[2324]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:20:59.213000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd2312d0f0 a2=0 a3=7ffd2312d0dc items=0 ppid=2208 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:20:59.223000 audit[2324]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:20:59.223000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd2312d0f0 a2=0 a3=7ffd2312d0dc items=0 ppid=2208 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:20:59.226000 audit[2329]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.226000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdc5bf390 a2=0 a3=7fffdc5bf37c items=0 ppid=2208 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:20:59.230000 audit[2331]: 
NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.230000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd701867c0 a2=0 a3=7ffd701867ac items=0 ppid=2208 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 6 00:20:59.235000 audit[2334]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.235000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc96ddd2e0 a2=0 a3=7ffc96ddd2cc items=0 ppid=2208 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 6 00:20:59.237000 audit[2335]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.237000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbf95b9b0 a2=0 a3=7fffbf95b99c items=0 ppid=2208 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:20:59.240000 audit[2337]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.240000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff17366480 a2=0 a3=7fff1736646c items=0 ppid=2208 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:20:59.241000 audit[2338]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.241000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff3cbe7c0 a2=0 a3=7ffff3cbe7ac items=0 ppid=2208 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:20:59.244000 audit[2340]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.244000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffb7d8880 a2=0 a3=7ffffb7d886c items=0 ppid=2208 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 6 00:20:59.248000 audit[2343]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.248000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd6b805040 a2=0 a3=7ffd6b80502c items=0 ppid=2208 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.248000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:20:59.250000 audit[2344]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2344 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.250000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd72691070 a2=0 a3=7ffd7269105c items=0 ppid=2208 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:20:59.253000 audit[2346]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.253000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7d324ac0 a2=0 a3=7ffe7d324aac items=0 ppid=2208 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:20:59.254000 audit[2347]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.254000 audit[2347]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbd12fec0 a2=0 a3=7ffdbd12feac items=0 ppid=2208 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:20:59.257000 audit[2349]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.257000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6a09c4b0 a2=0 a3=7ffd6a09c49c items=0 ppid=2208 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.257000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:20:59.262000 audit[2352]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.262000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0ad98bd0 a2=0 a3=7ffd0ad98bbc items=0 ppid=2208 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:20:59.267000 audit[2355]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.267000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd47145f30 a2=0 a3=7ffd47145f1c items=0 ppid=2208 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 6 00:20:59.269000 audit[2356]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.269000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd27d7afd0 a2=0 a3=7ffd27d7afbc items=0 ppid=2208 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.269000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:20:59.274000 audit[2358]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.274000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff2a6ae860 a2=0 a3=7fff2a6ae84c items=0 ppid=2208 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:20:59.278000 audit[2361]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.278000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffee01a8440 a2=0 a3=7ffee01a842c items=0 ppid=2208 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.278000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:20:59.280000 audit[2362]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.280000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff15c80f00 a2=0 a3=7fff15c80eec items=0 ppid=2208 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:20:59.283000 audit[2364]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.283000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc9bbf550 a2=0 a3=7ffcc9bbf53c items=0 ppid=2208 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:20:59.285000 audit[2365]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.285000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe64114ef0 a2=0 a3=7ffe64114edc items=0 ppid=2208 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:20:59.289000 audit[2367]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.289000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0891cb30 a2=0 a3=7ffc0891cb1c items=0 ppid=2208 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:20:59.294000 audit[2370]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:20:59.294000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff33cd2090 a2=0 a3=7fff33cd207c items=0 ppid=2208 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:20:59.298000 audit[2372]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:20:59.298000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffca6d9b0d0 a2=0 a3=7ffca6d9b0bc items=0 ppid=2208 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.298000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:20:59.298000 audit[2372]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:20:59.298000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffca6d9b0d0 a2=0 a3=7ffca6d9b0bc items=0 ppid=2208 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:59.298000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:20:59.684308 kubelet[2083]: E0906 00:20:59.684270 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:20:59.697383 kubelet[2083]: I0906 00:20:59.697174 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vr86h" podStartSLOduration=1.697152264 podStartE2EDuration="1.697152264s" podCreationTimestamp="2025-09-06 00:20:58 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:20:59.697049301 +0000 UTC m=+7.346285501" watchObservedRunningTime="2025-09-06 00:20:59.697152264 +0000 UTC m=+7.346388463" Sep 6 00:21:00.194675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787331185.mount: Deactivated successfully. Sep 6 00:21:01.571929 env[1286]: time="2025-09-06T00:21:01.571856626Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:01.573864 env[1286]: time="2025-09-06T00:21:01.573819299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:01.576551 env[1286]: time="2025-09-06T00:21:01.576495155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:01.578752 env[1286]: time="2025-09-06T00:21:01.578670380Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:01.579021 env[1286]: time="2025-09-06T00:21:01.578984473Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 6 00:21:01.584251 env[1286]: time="2025-09-06T00:21:01.584194677Z" level=info msg="CreateContainer within sandbox \"404c55c9d9182684fd2832f0f94283315010d7ae6f0352b0d33fb6c6564908cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 00:21:01.598132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046194414.mount: Deactivated successfully. Sep 6 00:21:01.607325 env[1286]: time="2025-09-06T00:21:01.607262253Z" level=info msg="CreateContainer within sandbox \"404c55c9d9182684fd2832f0f94283315010d7ae6f0352b0d33fb6c6564908cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8be7c4d4a68ff3b9c916d0da3473724a102e46030058157286c7ae0e4a20d62b\"" Sep 6 00:21:01.610083 env[1286]: time="2025-09-06T00:21:01.608401579Z" level=info msg="StartContainer for \"8be7c4d4a68ff3b9c916d0da3473724a102e46030058157286c7ae0e4a20d62b\"" Sep 6 00:21:01.707760 env[1286]: time="2025-09-06T00:21:01.707692579Z" level=info msg="StartContainer for \"8be7c4d4a68ff3b9c916d0da3473724a102e46030058157286c7ae0e4a20d62b\" returns successfully" Sep 6 00:21:01.876578 kubelet[2083]: E0906 00:21:01.876537 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:02.594486 systemd[1]: run-containerd-runc-k8s.io-8be7c4d4a68ff3b9c916d0da3473724a102e46030058157286c7ae0e4a20d62b-runc.sg4RCx.mount: Deactivated successfully. 
Sep 6 00:21:02.735395 kubelet[2083]: I0906 00:21:02.735268 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-ml7bx" podStartSLOduration=2.0215839349999998 podStartE2EDuration="4.735235353s" podCreationTimestamp="2025-09-06 00:20:58 +0000 UTC" firstStartedPulling="2025-09-06 00:20:58.867017014 +0000 UTC m=+6.516253193" lastFinishedPulling="2025-09-06 00:21:01.580668429 +0000 UTC m=+9.229904611" observedRunningTime="2025-09-06 00:21:02.734110742 +0000 UTC m=+10.383346941" watchObservedRunningTime="2025-09-06 00:21:02.735235353 +0000 UTC m=+10.384471555" Sep 6 00:21:08.489000 audit[1460]: USER_END pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.489878 sudo[1460]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:08.490993 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 6 00:21:08.491038 kernel: audit: type=1106 audit(1757118068.489:286): pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.489000 audit[1460]: CRED_DISP pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.497776 kernel: audit: type=1104 audit(1757118068.489:287): pid=1460 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.498592 sshd[1454]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:08.506466 kernel: audit: type=1106 audit(1757118068.499:288): pid=1454 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:08.499000 audit[1454]: USER_END pid=1454 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:08.505157 systemd[1]: sshd@6-64.227.108.127:22-147.75.109.163:55246.service: Deactivated successfully. Sep 6 00:21:08.506095 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:21:08.499000 audit[1454]: CRED_DISP pid=1454 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:08.512106 kernel: audit: type=1104 audit(1757118068.499:289): pid=1454 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:08.508252 systemd-logind[1277]: Session 7 logged out. Waiting for processes to exit. 
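For the tigera-operator startup-latency record above, the reported durations appear to follow directly from the logged timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A rough check (Python; timestamps copied from the record and truncated to microseconds, shown only as an illustration, not kubelet's own code):

```python
# Recompute the two durations reported by pod_startup_latency_tracker
# from the timestamps logged in the record above.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created  = datetime.strptime("2025-09-06 00:20:58.000000", fmt)
pull_beg = datetime.strptime("2025-09-06 00:20:58.867017", fmt)
pull_end = datetime.strptime("2025-09-06 00:21:01.580668", fmt)
running  = datetime.strptime("2025-09-06 00:21:02.735235", fmt)

e2e = (running - created).total_seconds()           # ~4.735s (podStartE2EDuration)
slo = e2e - (pull_end - pull_beg).total_seconds()   # ~2.022s (podStartSLOduration)
print(f"E2E={e2e:.3f}s  SLO={slo:.3f}s")
```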
Sep 6 00:21:08.513392 systemd-logind[1277]: Removed session 7. Sep 6 00:21:08.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-64.227.108.127:22-147.75.109.163:55246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:08.517798 kernel: audit: type=1131 audit(1757118068.504:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-64.227.108.127:22-147.75.109.163:55246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:09.427000 audit[2454]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.435370 kernel: audit: type=1325 audit(1757118069.427:291): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.435482 kernel: audit: type=1300 audit(1757118069.427:291): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd7f5a6500 a2=0 a3=7ffd7f5a64ec items=0 ppid=2208 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.427000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd7f5a6500 a2=0 a3=7ffd7f5a64ec items=0 ppid=2208 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:09.439735 kernel: audit: type=1327 audit(1757118069.427:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:09.439000 audit[2454]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.442730 kernel: audit: type=1325 audit(1757118069.439:292): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.439000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7f5a6500 a2=0 a3=0 items=0 ppid=2208 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.456752 kernel: audit: type=1300 audit(1757118069.439:292): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7f5a6500 a2=0 a3=0 items=0 ppid=2208 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:09.470000 audit[2456]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.470000 audit[2456]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd7ae431a0 a2=0 a3=7ffd7ae4318c items=0 ppid=2208 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:09.477000 audit[2456]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:09.477000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7ae431a0 a2=0 a3=0 items=0 ppid=2208 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:09.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:09.889167 update_engine[1279]: I0906 00:21:09.888773 1279 update_attempter.cc:509] Updating boot flags... Sep 6 00:21:11.880982 kubelet[2083]: E0906 00:21:11.880948 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:11.895000 audit[2473]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:11.895000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc2e7fff60 a2=0 a3=7ffc2e7fff4c items=0 ppid=2208 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:11.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:11.900000 audit[2473]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:11.900000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2e7fff60 a2=0 a3=0 items=0 ppid=2208 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:11.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:11.922000 audit[2475]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:11.922000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc303ab210 a2=0 a3=7ffc303ab1fc items=0 ppid=2208 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:11.922000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:11.931000 audit[2475]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:11.931000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc303ab210 a2=0 a3=0 items=0 ppid=2208 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:11.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:12.307664 kubelet[2083]: I0906 00:21:12.307489 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9419174-5292-455c-8591-74dd531a4e19-tigera-ca-bundle\") pod \"calico-typha-788d77b55d-t6txf\" (UID: \"a9419174-5292-455c-8591-74dd531a4e19\") " pod="calico-system/calico-typha-788d77b55d-t6txf" Sep 6 00:21:12.307664 kubelet[2083]: I0906 00:21:12.307595 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbwz\" (UniqueName: \"kubernetes.io/projected/a9419174-5292-455c-8591-74dd531a4e19-kube-api-access-mrbwz\") pod \"calico-typha-788d77b55d-t6txf\" (UID: \"a9419174-5292-455c-8591-74dd531a4e19\") " pod="calico-system/calico-typha-788d77b55d-t6txf" Sep 6 00:21:12.307664 kubelet[2083]: I0906 00:21:12.307623 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a9419174-5292-455c-8591-74dd531a4e19-typha-certs\") pod \"calico-typha-788d77b55d-t6txf\" (UID: \"a9419174-5292-455c-8591-74dd531a4e19\") " pod="calico-system/calico-typha-788d77b55d-t6txf" Sep 6 00:21:12.487088 kubelet[2083]: E0906 00:21:12.486778 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:12.488011 env[1286]: time="2025-09-06T00:21:12.487967342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788d77b55d-t6txf,Uid:a9419174-5292-455c-8591-74dd531a4e19,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:12.514684 env[1286]: time="2025-09-06T00:21:12.514602357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:12.514926 env[1286]: time="2025-09-06T00:21:12.514689980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:12.514926 env[1286]: time="2025-09-06T00:21:12.514722966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:12.516583 env[1286]: time="2025-09-06T00:21:12.515347273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da8707a79dbbb0f623c24d075957cffaa96c8e95a94cf94d869c427691d57da4 pid=2485 runtime=io.containerd.runc.v2 Sep 6 00:21:12.611166 kubelet[2083]: I0906 00:21:12.611096 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-cni-bin-dir\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611480 kubelet[2083]: I0906 00:21:12.611168 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7ed99db-e99d-480f-b129-6491240fffb0-tigera-ca-bundle\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611480 kubelet[2083]: I0906 00:21:12.611216 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-var-lib-calico\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611480 kubelet[2083]: I0906 00:21:12.611234 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-cni-log-dir\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611480 kubelet[2083]: I0906 00:21:12.611275 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-policysync\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611480 kubelet[2083]: I0906 00:21:12.611291 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-xtables-lock\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611776 kubelet[2083]: I0906 00:21:12.611364 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lh5\" (UniqueName: \"kubernetes.io/projected/f7ed99db-e99d-480f-b129-6491240fffb0-kube-api-access-z5lh5\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611776 kubelet[2083]: I0906 00:21:12.611383 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-lib-modules\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611776 kubelet[2083]: I0906 00:21:12.611431 2083 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f7ed99db-e99d-480f-b129-6491240fffb0-node-certs\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611776 kubelet[2083]: I0906 00:21:12.611450 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-flexvol-driver-host\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.611776 kubelet[2083]: I0906 00:21:12.611464 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-cni-net-dir\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.612054 kubelet[2083]: I0906 00:21:12.611531 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f7ed99db-e99d-480f-b129-6491240fffb0-var-run-calico\") pod \"calico-node-mztrq\" (UID: \"f7ed99db-e99d-480f-b129-6491240fffb0\") " pod="calico-system/calico-node-mztrq" Sep 6 00:21:12.632788 env[1286]: time="2025-09-06T00:21:12.632736301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788d77b55d-t6txf,Uid:a9419174-5292-455c-8591-74dd531a4e19,Namespace:calico-system,Attempt:0,} returns sandbox id \"da8707a79dbbb0f623c24d075957cffaa96c8e95a94cf94d869c427691d57da4\"" Sep 6 00:21:12.635870 kubelet[2083]: E0906 00:21:12.635834 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:12.638398 env[1286]: time="2025-09-06T00:21:12.638356583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 00:21:12.721168 kubelet[2083]: E0906 00:21:12.721130 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.721168 kubelet[2083]: W0906 00:21:12.721155 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.721437 kubelet[2083]: E0906 00:21:12.721199 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.728553 kubelet[2083]: E0906 00:21:12.728516 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.728814 kubelet[2083]: W0906 00:21:12.728787 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.728978 kubelet[2083]: E0906 00:21:12.728957 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.762460 kubelet[2083]: E0906 00:21:12.762396 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:12.775056 env[1286]: time="2025-09-06T00:21:12.774993527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mztrq,Uid:f7ed99db-e99d-480f-b129-6491240fffb0,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:12.821456 kubelet[2083]: E0906 00:21:12.821406 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.821456 kubelet[2083]: W0906 00:21:12.821446 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.821725 kubelet[2083]: E0906 00:21:12.821480 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.821865 kubelet[2083]: E0906 00:21:12.821838 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.821865 kubelet[2083]: W0906 00:21:12.821864 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.822013 kubelet[2083]: E0906 00:21:12.821888 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.822141 kubelet[2083]: E0906 00:21:12.822120 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.822141 kubelet[2083]: W0906 00:21:12.822139 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.822273 kubelet[2083]: E0906 00:21:12.822154 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.822417 kubelet[2083]: E0906 00:21:12.822395 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.822417 kubelet[2083]: W0906 00:21:12.822415 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.822541 kubelet[2083]: E0906 00:21:12.822431 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.822707 kubelet[2083]: E0906 00:21:12.822684 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.822707 kubelet[2083]: W0906 00:21:12.822704 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.824117 kubelet[2083]: E0906 00:21:12.824067 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.825337 kubelet[2083]: E0906 00:21:12.824398 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.825337 kubelet[2083]: W0906 00:21:12.824421 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.825337 kubelet[2083]: E0906 00:21:12.824441 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.825337 kubelet[2083]: E0906 00:21:12.824693 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.826526 kubelet[2083]: W0906 00:21:12.824708 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.826526 kubelet[2083]: E0906 00:21:12.825832 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.826526 kubelet[2083]: E0906 00:21:12.826110 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.826526 kubelet[2083]: W0906 00:21:12.826125 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.826526 kubelet[2083]: E0906 00:21:12.826144 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.826526 kubelet[2083]: E0906 00:21:12.826397 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.826526 kubelet[2083]: W0906 00:21:12.826411 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.826526 kubelet[2083]: E0906 00:21:12.826427 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.826991 kubelet[2083]: E0906 00:21:12.826649 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.826991 kubelet[2083]: W0906 00:21:12.826659 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.826991 kubelet[2083]: E0906 00:21:12.826670 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.826991 kubelet[2083]: E0906 00:21:12.826969 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.826991 kubelet[2083]: W0906 00:21:12.826983 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.827211 kubelet[2083]: E0906 00:21:12.826998 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.827286 kubelet[2083]: E0906 00:21:12.827258 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.827286 kubelet[2083]: W0906 00:21:12.827279 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.827410 kubelet[2083]: E0906 00:21:12.827295 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.827547 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.831762 kubelet[2083]: W0906 00:21:12.827567 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.827583 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.829906 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.831762 kubelet[2083]: W0906 00:21:12.829923 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.829947 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.830190 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.831762 kubelet[2083]: W0906 00:21:12.830205 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.830221 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.831762 kubelet[2083]: E0906 00:21:12.830480 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.832301 kubelet[2083]: W0906 00:21:12.830489 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.832301 kubelet[2083]: E0906 00:21:12.830500 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.832301 kubelet[2083]: E0906 00:21:12.830689 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.832301 kubelet[2083]: W0906 00:21:12.830696 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.832301 kubelet[2083]: E0906 00:21:12.830704 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.832301 kubelet[2083]: E0906 00:21:12.832032 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.832301 kubelet[2083]: W0906 00:21:12.832050 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.832301 kubelet[2083]: E0906 00:21:12.832067 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.832597 kubelet[2083]: E0906 00:21:12.832337 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.832597 kubelet[2083]: W0906 00:21:12.832352 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.832597 kubelet[2083]: E0906 00:21:12.832368 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.832748 kubelet[2083]: E0906 00:21:12.832616 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.832748 kubelet[2083]: W0906 00:21:12.832631 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.832748 kubelet[2083]: E0906 00:21:12.832646 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.841811 env[1286]: time="2025-09-06T00:21:12.840203997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:12.841811 env[1286]: time="2025-09-06T00:21:12.840246051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:12.841811 env[1286]: time="2025-09-06T00:21:12.840256760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:12.841811 env[1286]: time="2025-09-06T00:21:12.840390274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04 pid=2540 runtime=io.containerd.runc.v2 Sep 6 00:21:12.927411 kubelet[2083]: E0906 00:21:12.924900 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.927411 kubelet[2083]: W0906 00:21:12.924931 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.927411 kubelet[2083]: E0906 00:21:12.924970 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.927411 kubelet[2083]: I0906 00:21:12.925010 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbglt\" (UniqueName: \"kubernetes.io/projected/9086b069-766b-4d15-aa22-c0caba04aa75-kube-api-access-pbglt\") pod \"csi-node-driver-sldlw\" (UID: \"9086b069-766b-4d15-aa22-c0caba04aa75\") " pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:12.931104 kubelet[2083]: E0906 00:21:12.929596 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.931104 kubelet[2083]: W0906 00:21:12.929625 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.931104 kubelet[2083]: E0906 00:21:12.929664 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.931104 kubelet[2083]: I0906 00:21:12.929834 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9086b069-766b-4d15-aa22-c0caba04aa75-kubelet-dir\") pod \"csi-node-driver-sldlw\" (UID: \"9086b069-766b-4d15-aa22-c0caba04aa75\") " pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:12.931104 kubelet[2083]: E0906 00:21:12.930052 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.931104 kubelet[2083]: W0906 00:21:12.930065 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.931104 kubelet[2083]: E0906 00:21:12.930091 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.931104 kubelet[2083]: E0906 00:21:12.930263 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.931104 kubelet[2083]: W0906 00:21:12.930270 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.931607 kubelet[2083]: E0906 00:21:12.930281 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.931607 kubelet[2083]: E0906 00:21:12.930455 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.931607 kubelet[2083]: W0906 00:21:12.930462 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.931607 kubelet[2083]: E0906 00:21:12.930472 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.931607 kubelet[2083]: I0906 00:21:12.930492 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9086b069-766b-4d15-aa22-c0caba04aa75-varrun\") pod \"csi-node-driver-sldlw\" (UID: \"9086b069-766b-4d15-aa22-c0caba04aa75\") " pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:12.931607 kubelet[2083]: E0906 00:21:12.931141 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.931607 kubelet[2083]: W0906 00:21:12.931157 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.931607 kubelet[2083]: E0906 00:21:12.931178 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.931607 kubelet[2083]: I0906 00:21:12.931199 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9086b069-766b-4d15-aa22-c0caba04aa75-registration-dir\") pod \"csi-node-driver-sldlw\" (UID: \"9086b069-766b-4d15-aa22-c0caba04aa75\") " pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:12.932460 kubelet[2083]: E0906 00:21:12.932432 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.932460 kubelet[2083]: W0906 00:21:12.932451 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.932615 kubelet[2083]: E0906 00:21:12.932562 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.932615 kubelet[2083]: I0906 00:21:12.932590 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9086b069-766b-4d15-aa22-c0caba04aa75-socket-dir\") pod \"csi-node-driver-sldlw\" (UID: \"9086b069-766b-4d15-aa22-c0caba04aa75\") " pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:12.932927 kubelet[2083]: E0906 00:21:12.932906 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.932927 kubelet[2083]: W0906 00:21:12.932921 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.933062 kubelet[2083]: E0906 00:21:12.933029 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.933278 kubelet[2083]: E0906 00:21:12.933259 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.933278 kubelet[2083]: W0906 00:21:12.933273 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.933401 kubelet[2083]: E0906 00:21:12.933353 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.933603 kubelet[2083]: E0906 00:21:12.933585 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.933603 kubelet[2083]: W0906 00:21:12.933604 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.933761 kubelet[2083]: E0906 00:21:12.933683 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.933983 kubelet[2083]: E0906 00:21:12.933965 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.933983 kubelet[2083]: W0906 00:21:12.933977 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.934109 kubelet[2083]: E0906 00:21:12.933992 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.934393 kubelet[2083]: E0906 00:21:12.934366 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.934393 kubelet[2083]: W0906 00:21:12.934386 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.934528 kubelet[2083]: E0906 00:21:12.934397 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.935819 kubelet[2083]: E0906 00:21:12.934734 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.935819 kubelet[2083]: W0906 00:21:12.934757 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.935819 kubelet[2083]: E0906 00:21:12.934769 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.936056 kubelet[2083]: E0906 00:21:12.935840 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.936056 kubelet[2083]: W0906 00:21:12.935852 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.936056 kubelet[2083]: E0906 00:21:12.935865 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:12.936410 kubelet[2083]: E0906 00:21:12.936313 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:12.936410 kubelet[2083]: W0906 00:21:12.936326 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:12.936410 kubelet[2083]: E0906 00:21:12.936341 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:12.967000 audit[2604]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:12.968989 kubelet[2083]: E0906 00:21:12.968943 2083 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podf7ed99db-e99d-480f-b129-6491240fffb0/7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\": RecentStats: unable to find data in memory cache]" Sep 6 00:21:12.967000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe7fe57dc0 a2=0 a3=7ffe7fe57dac items=0 ppid=2208 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:12.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:12.970000 audit[2604]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:12.970000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7fe57dc0 a2=0 a3=0 items=0 ppid=2208 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:12.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:12.989754 env[1286]: time="2025-09-06T00:21:12.988532498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mztrq,Uid:f7ed99db-e99d-480f-b129-6491240fffb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\"" Sep 6 00:21:13.034208 kubelet[2083]: E0906 00:21:13.034156 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.034504 kubelet[2083]: W0906 00:21:13.034471 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.034736 kubelet[2083]: E0906 00:21:13.034689 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.035282 kubelet[2083]: E0906 00:21:13.035260 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.035447 kubelet[2083]: W0906 00:21:13.035423 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.035616 kubelet[2083]: E0906 00:21:13.035595 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:13.036058 kubelet[2083]: E0906 00:21:13.036035 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.036197 kubelet[2083]: W0906 00:21:13.036173 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.036341 kubelet[2083]: E0906 00:21:13.036319 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.036888 kubelet[2083]: E0906 00:21:13.036864 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.036888 kubelet[2083]: W0906 00:21:13.036884 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.037063 kubelet[2083]: E0906 00:21:13.036910 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.037690 kubelet[2083]: E0906 00:21:13.037660 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.037690 kubelet[2083]: W0906 00:21:13.037684 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.037934 kubelet[2083]: E0906 00:21:13.037905 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.038920 kubelet[2083]: E0906 00:21:13.038878 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.038920 kubelet[2083]: W0906 00:21:13.038894 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.039147 kubelet[2083]: E0906 00:21:13.039119 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.039847 kubelet[2083]: E0906 00:21:13.039821 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.039847 kubelet[2083]: W0906 00:21:13.039841 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.040041 kubelet[2083]: E0906 00:21:13.040021 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:13.040912 kubelet[2083]: E0906 00:21:13.040871 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.040912 kubelet[2083]: W0906 00:21:13.040890 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.041150 kubelet[2083]: E0906 00:21:13.041125 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.041274 kubelet[2083]: E0906 00:21:13.041146 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.041390 kubelet[2083]: W0906 00:21:13.041368 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.041630 kubelet[2083]: E0906 00:21:13.041607 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.043006 kubelet[2083]: E0906 00:21:13.042982 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.043136 kubelet[2083]: W0906 00:21:13.043114 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.043380 kubelet[2083]: E0906 00:21:13.043359 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.044157 kubelet[2083]: E0906 00:21:13.044135 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.044306 kubelet[2083]: W0906 00:21:13.044281 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.044545 kubelet[2083]: E0906 00:21:13.044525 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.044740 kubelet[2083]: E0906 00:21:13.044700 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.044861 kubelet[2083]: W0906 00:21:13.044841 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.046501 kubelet[2083]: E0906 00:21:13.046467 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:13.047090 kubelet[2083]: E0906 00:21:13.046851 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.050889 kubelet[2083]: W0906 00:21:13.050821 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.052317 kubelet[2083]: E0906 00:21:13.052271 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.055003 kubelet[2083]: E0906 00:21:13.054893 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.055003 kubelet[2083]: W0906 00:21:13.054925 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.055205 kubelet[2083]: E0906 00:21:13.055114 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.055536 kubelet[2083]: E0906 00:21:13.055499 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.055536 kubelet[2083]: W0906 00:21:13.055524 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.056278 kubelet[2083]: E0906 00:21:13.055659 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.059349 kubelet[2083]: E0906 00:21:13.057570 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.059349 kubelet[2083]: W0906 00:21:13.057598 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.059349 kubelet[2083]: E0906 00:21:13.057827 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.060957 kubelet[2083]: E0906 00:21:13.060197 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.060957 kubelet[2083]: W0906 00:21:13.060225 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.060957 kubelet[2083]: E0906 00:21:13.060330 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:13.060957 kubelet[2083]: E0906 00:21:13.060632 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.060957 kubelet[2083]: W0906 00:21:13.060646 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.060957 kubelet[2083]: E0906 00:21:13.060742 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.061456 kubelet[2083]: E0906 00:21:13.060982 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.061456 kubelet[2083]: W0906 00:21:13.060994 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.061456 kubelet[2083]: E0906 00:21:13.061085 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.062889 kubelet[2083]: E0906 00:21:13.062838 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.062889 kubelet[2083]: W0906 00:21:13.062864 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.063208 kubelet[2083]: E0906 00:21:13.063067 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.065090 kubelet[2083]: E0906 00:21:13.063657 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.065090 kubelet[2083]: W0906 00:21:13.063679 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.065090 kubelet[2083]: E0906 00:21:13.063894 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.065090 kubelet[2083]: E0906 00:21:13.064117 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.065090 kubelet[2083]: W0906 00:21:13.064129 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.065090 kubelet[2083]: E0906 00:21:13.064236 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:13.065090 kubelet[2083]: E0906 00:21:13.064683 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.065090 kubelet[2083]: W0906 00:21:13.064700 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.066758 kubelet[2083]: E0906 00:21:13.066673 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.069812 kubelet[2083]: E0906 00:21:13.068285 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.069812 kubelet[2083]: W0906 00:21:13.068306 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.069812 kubelet[2083]: E0906 00:21:13.068337 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.069812 kubelet[2083]: E0906 00:21:13.068860 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.069812 kubelet[2083]: W0906 00:21:13.068876 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.069812 kubelet[2083]: E0906 00:21:13.068897 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.077130 kubelet[2083]: E0906 00:21:13.077078 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:13.077130 kubelet[2083]: W0906 00:21:13.077109 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:13.077130 kubelet[2083]: E0906 00:21:13.077137 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:13.424907 systemd[1]: run-containerd-runc-k8s.io-da8707a79dbbb0f623c24d075957cffaa96c8e95a94cf94d869c427691d57da4-runc.Uzx24z.mount: Deactivated successfully. Sep 6 00:21:14.244852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622536312.mount: Deactivated successfully. 
Sep 6 00:21:14.611154 kubelet[2083]: E0906 00:21:14.610130 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:15.819498 env[1286]: time="2025-09-06T00:21:15.819452650Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.822100 env[1286]: time="2025-09-06T00:21:15.822065363Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.824022 env[1286]: time="2025-09-06T00:21:15.823990113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.827364 env[1286]: time="2025-09-06T00:21:15.827330925Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:15.828081 env[1286]: time="2025-09-06T00:21:15.828050699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 6 00:21:15.830032 env[1286]: time="2025-09-06T00:21:15.830002353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 00:21:15.854387 env[1286]: time="2025-09-06T00:21:15.854222554Z" level=info msg="CreateContainer within sandbox \"da8707a79dbbb0f623c24d075957cffaa96c8e95a94cf94d869c427691d57da4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 00:21:15.867476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485696804.mount: Deactivated successfully. 
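The pod_workers error at the top of the entry above is why csi-node-driver-sldlw is not being started yet: the node still reports NetworkReady=false because no CNI configuration has been written, so kubelet skips syncing pods that need pod networking while host-networked pods such as calico-node and calico-typha proceed. A quick node-side check for that condition is to look for a parseable CNI config; the /etc/cni/net.d directory in the sketch below is the conventional location and is an assumption about this node rather than something stated in the log.

```python
import glob
import json
import os

# Conventional CNI configuration directory (assumed, not shown in the log).
CNI_DIR = "/etc/cni/net.d"


def cni_initialized(cni_dir: str = CNI_DIR) -> bool:
    """Return True once at least one parseable CNI config file exists."""
    for path in sorted(glob.glob(os.path.join(cni_dir, "*.conf*"))):
        try:
            with open(path) as fh:
                json.load(fh)  # .conf and .conflist files are both JSON
            return True
        except (OSError, json.JSONDecodeError):
            continue
    return False


if __name__ == "__main__":
    print("CNI initialized:", cni_initialized())
```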
Sep 6 00:21:15.871717 env[1286]: time="2025-09-06T00:21:15.871658949Z" level=info msg="CreateContainer within sandbox \"da8707a79dbbb0f623c24d075957cffaa96c8e95a94cf94d869c427691d57da4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12fea637c768f25ed1b70f00d8bba5cc7c1f5640e1e91e1c1a1210675c6c9788\"" Sep 6 00:21:15.872496 env[1286]: time="2025-09-06T00:21:15.872467685Z" level=info msg="StartContainer for \"12fea637c768f25ed1b70f00d8bba5cc7c1f5640e1e91e1c1a1210675c6c9788\"" Sep 6 00:21:15.995816 env[1286]: time="2025-09-06T00:21:15.995773482Z" level=info msg="StartContainer for \"12fea637c768f25ed1b70f00d8bba5cc7c1f5640e1e91e1c1a1210675c6c9788\" returns successfully" Sep 6 00:21:16.615067 kubelet[2083]: E0906 00:21:16.615021 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:16.755625 kubelet[2083]: E0906 00:21:16.755586 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:16.761223 kubelet[2083]: E0906 00:21:16.761187 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.761223 kubelet[2083]: W0906 00:21:16.761214 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.761421 kubelet[2083]: E0906 00:21:16.761237 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.762352 kubelet[2083]: E0906 00:21:16.762328 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.762352 kubelet[2083]: W0906 00:21:16.762346 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.762543 kubelet[2083]: E0906 00:21:16.762367 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.762956 kubelet[2083]: E0906 00:21:16.762932 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.762956 kubelet[2083]: W0906 00:21:16.762945 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.762956 kubelet[2083]: E0906 00:21:16.762957 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.763624 kubelet[2083]: E0906 00:21:16.763589 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.763624 kubelet[2083]: W0906 00:21:16.763603 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.763624 kubelet[2083]: E0906 00:21:16.763616 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.765255 kubelet[2083]: E0906 00:21:16.765233 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.765255 kubelet[2083]: W0906 00:21:16.765250 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.765380 kubelet[2083]: E0906 00:21:16.765262 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.765481 kubelet[2083]: E0906 00:21:16.765466 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.765481 kubelet[2083]: W0906 00:21:16.765478 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.765576 kubelet[2083]: E0906 00:21:16.765488 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.765656 kubelet[2083]: E0906 00:21:16.765643 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.765656 kubelet[2083]: W0906 00:21:16.765654 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.765775 kubelet[2083]: E0906 00:21:16.765663 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.765935 kubelet[2083]: E0906 00:21:16.765919 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.765935 kubelet[2083]: W0906 00:21:16.765932 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766029 kubelet[2083]: E0906 00:21:16.765942 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.766122 kubelet[2083]: E0906 00:21:16.766102 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.766122 kubelet[2083]: W0906 00:21:16.766112 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766211 kubelet[2083]: E0906 00:21:16.766121 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.766280 kubelet[2083]: E0906 00:21:16.766267 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.766280 kubelet[2083]: W0906 00:21:16.766277 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766363 kubelet[2083]: E0906 00:21:16.766285 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.766632 kubelet[2083]: E0906 00:21:16.766448 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.766632 kubelet[2083]: W0906 00:21:16.766465 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766632 kubelet[2083]: E0906 00:21:16.766478 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.766846 kubelet[2083]: E0906 00:21:16.766688 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.766846 kubelet[2083]: W0906 00:21:16.766696 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766846 kubelet[2083]: E0906 00:21:16.766704 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.766981 kubelet[2083]: E0906 00:21:16.766885 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.766981 kubelet[2083]: W0906 00:21:16.766892 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.766981 kubelet[2083]: E0906 00:21:16.766900 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.767071 kubelet[2083]: E0906 00:21:16.767022 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.767071 kubelet[2083]: W0906 00:21:16.767028 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.767071 kubelet[2083]: E0906 00:21:16.767035 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.767176 kubelet[2083]: E0906 00:21:16.767160 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.767176 kubelet[2083]: W0906 00:21:16.767170 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.767243 kubelet[2083]: E0906 00:21:16.767178 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767417 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.771745 kubelet[2083]: W0906 00:21:16.767429 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767440 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767632 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.771745 kubelet[2083]: W0906 00:21:16.767640 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767650 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767837 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.771745 kubelet[2083]: W0906 00:21:16.767843 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.767855 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.771745 kubelet[2083]: E0906 00:21:16.768040 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772222 kubelet[2083]: W0906 00:21:16.768050 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768062 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768225 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772222 kubelet[2083]: W0906 00:21:16.768232 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768242 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768384 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772222 kubelet[2083]: W0906 00:21:16.768390 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768399 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772222 kubelet[2083]: E0906 00:21:16.768591 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772222 kubelet[2083]: W0906 00:21:16.768598 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.768608 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.768949 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772523 kubelet[2083]: W0906 00:21:16.768958 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.769016 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.769168 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772523 kubelet[2083]: W0906 00:21:16.769175 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.769240 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.769342 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772523 kubelet[2083]: W0906 00:21:16.769349 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772523 kubelet[2083]: E0906 00:21:16.769359 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769504 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772822 kubelet[2083]: W0906 00:21:16.769511 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769521 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769662 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772822 kubelet[2083]: W0906 00:21:16.769668 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769677 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769883 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.772822 kubelet[2083]: W0906 00:21:16.769891 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.769944 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.772822 kubelet[2083]: E0906 00:21:16.770407 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.773225 kubelet[2083]: W0906 00:21:16.770428 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.773225 kubelet[2083]: E0906 00:21:16.770446 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.773225 kubelet[2083]: E0906 00:21:16.770679 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.773225 kubelet[2083]: W0906 00:21:16.770689 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.773225 kubelet[2083]: E0906 00:21:16.770830 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.773225 kubelet[2083]: E0906 00:21:16.771175 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.773225 kubelet[2083]: W0906 00:21:16.772869 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.773225 kubelet[2083]: E0906 00:21:16.772893 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.774042 kubelet[2083]: E0906 00:21:16.773968 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.774042 kubelet[2083]: W0906 00:21:16.773983 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.774042 kubelet[2083]: E0906 00:21:16.773999 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:21:16.774222 kubelet[2083]: E0906 00:21:16.774168 2083 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:21:16.774222 kubelet[2083]: W0906 00:21:16.774175 2083 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:21:16.774222 kubelet[2083]: E0906 00:21:16.774183 2083 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:21:16.778692 kubelet[2083]: I0906 00:21:16.778632 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-788d77b55d-t6txf" podStartSLOduration=1.58712565 podStartE2EDuration="4.778616023s" podCreationTimestamp="2025-09-06 00:21:12 +0000 UTC" firstStartedPulling="2025-09-06 00:21:12.637964489 +0000 UTC m=+20.287200681" lastFinishedPulling="2025-09-06 00:21:15.829454875 +0000 UTC m=+23.478691054" observedRunningTime="2025-09-06 00:21:16.778254032 +0000 UTC m=+24.427490233" watchObservedRunningTime="2025-09-06 00:21:16.778616023 +0000 UTC m=+24.427852223" Sep 6 00:21:17.596053 env[1286]: time="2025-09-06T00:21:17.595949960Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.598788 env[1286]: time="2025-09-06T00:21:17.598746375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.601911 env[1286]: time="2025-09-06T00:21:17.600130671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.605195 env[1286]: time="2025-09-06T00:21:17.602084496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:17.605195 env[1286]: time="2025-09-06T00:21:17.602866315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 6 00:21:17.619325 env[1286]: time="2025-09-06T00:21:17.618821403Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 00:21:17.633824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19353432.mount: Deactivated successfully. 
Sep 6 00:21:17.637578 env[1286]: time="2025-09-06T00:21:17.637524217Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b\"" Sep 6 00:21:17.639856 env[1286]: time="2025-09-06T00:21:17.638390893Z" level=info msg="StartContainer for \"c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b\"" Sep 6 00:21:17.739696 env[1286]: time="2025-09-06T00:21:17.739645497Z" level=info msg="StartContainer for \"c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b\" returns successfully" Sep 6 00:21:17.761814 kubelet[2083]: I0906 00:21:17.761771 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:17.762305 kubelet[2083]: E0906 00:21:17.762235 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:17.805122 env[1286]: time="2025-09-06T00:21:17.805074800Z" level=info msg="shim disconnected" id=c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b Sep 6 00:21:17.806066 env[1286]: time="2025-09-06T00:21:17.806039652Z" level=warning msg="cleaning up after shim disconnected" id=c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b namespace=k8s.io Sep 6 00:21:17.806251 env[1286]: time="2025-09-06T00:21:17.806228170Z" level=info msg="cleaning up dead shim" Sep 6 00:21:17.819432 env[1286]: time="2025-09-06T00:21:17.819296975Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2769 runtime=io.containerd.runc.v2\n" Sep 6 00:21:17.839087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c805f25777ad903007ac05f6c607f4642a6a62f920c640cd4771b13ab65bac8b-rootfs.mount: Deactivated successfully. 
Sep 6 00:21:18.610783 kubelet[2083]: E0906 00:21:18.610730 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:18.766790 env[1286]: time="2025-09-06T00:21:18.766752032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 00:21:20.610372 kubelet[2083]: E0906 00:21:20.609856 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:22.611460 kubelet[2083]: E0906 00:21:22.610520 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:23.525670 env[1286]: time="2025-09-06T00:21:23.525612331Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:23.527462 env[1286]: time="2025-09-06T00:21:23.527426038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:23.528942 env[1286]: time="2025-09-06T00:21:23.528911338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:23.530539 env[1286]: time="2025-09-06T00:21:23.530497074Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:23.531099 env[1286]: time="2025-09-06T00:21:23.531062678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 6 00:21:23.534347 env[1286]: time="2025-09-06T00:21:23.533457059Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 00:21:23.546235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528672041.mount: Deactivated successfully. 
Sep 6 00:21:23.557088 env[1286]: time="2025-09-06T00:21:23.557030357Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b\"" Sep 6 00:21:23.558812 env[1286]: time="2025-09-06T00:21:23.557978909Z" level=info msg="StartContainer for \"6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b\"" Sep 6 00:21:23.662754 env[1286]: time="2025-09-06T00:21:23.658007950Z" level=info msg="StartContainer for \"6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b\" returns successfully" Sep 6 00:21:24.359463 env[1286]: time="2025-09-06T00:21:24.359385264Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:21:24.384820 kubelet[2083]: I0906 00:21:24.384428 2083 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:21:24.399046 env[1286]: time="2025-09-06T00:21:24.398981215Z" level=info msg="shim disconnected" id=6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b Sep 6 00:21:24.399318 env[1286]: time="2025-09-06T00:21:24.399294148Z" level=warning msg="cleaning up after shim disconnected" id=6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b namespace=k8s.io Sep 6 00:21:24.399408 env[1286]: time="2025-09-06T00:21:24.399391155Z" level=info msg="cleaning up dead shim" Sep 6 00:21:24.469837 env[1286]: time="2025-09-06T00:21:24.469787481Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2829 runtime=io.containerd.runc.v2\n" Sep 6 00:21:24.529990 kubelet[2083]: I0906 00:21:24.529250 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64t5v\" (UniqueName: \"kubernetes.io/projected/68db3c5f-2540-494b-b7e8-8a05c1287332-kube-api-access-64t5v\") pod \"coredns-7c65d6cfc9-wghq8\" (UID: \"68db3c5f-2540-494b-b7e8-8a05c1287332\") " pod="kube-system/coredns-7c65d6cfc9-wghq8" Sep 6 00:21:24.529990 kubelet[2083]: I0906 00:21:24.529348 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b8e70a2-2b3d-4784-830f-5560042446fa-calico-apiserver-certs\") pod \"calico-apiserver-69b54c6ffc-ffpz9\" (UID: \"6b8e70a2-2b3d-4784-830f-5560042446fa\") " pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" Sep 6 00:21:24.529990 kubelet[2083]: I0906 00:21:24.529403 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68db3c5f-2540-494b-b7e8-8a05c1287332-config-volume\") pod \"coredns-7c65d6cfc9-wghq8\" (UID: \"68db3c5f-2540-494b-b7e8-8a05c1287332\") " pod="kube-system/coredns-7c65d6cfc9-wghq8" Sep 6 00:21:24.529990 kubelet[2083]: I0906 00:21:24.529449 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbb4\" (UniqueName: \"kubernetes.io/projected/6b8e70a2-2b3d-4784-830f-5560042446fa-kube-api-access-9xbb4\") pod \"calico-apiserver-69b54c6ffc-ffpz9\" (UID: \"6b8e70a2-2b3d-4784-830f-5560042446fa\") " 
pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" Sep 6 00:21:24.542618 systemd[1]: run-containerd-runc-k8s.io-6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b-runc.vumSYB.mount: Deactivated successfully. Sep 6 00:21:24.542809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e08dca61fc0bf69ef884fe498e990b76736950dd20b15a1fa1effcc1910f18b-rootfs.mount: Deactivated successfully. Sep 6 00:21:24.614204 env[1286]: time="2025-09-06T00:21:24.613564536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sldlw,Uid:9086b069-766b-4d15-aa22-c0caba04aa75,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:24.638291 kubelet[2083]: I0906 00:21:24.638242 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/600d572a-5949-4e09-892e-2ade90d1ec2c-goldmane-key-pair\") pod \"goldmane-7988f88666-m85r4\" (UID: \"600d572a-5949-4e09-892e-2ade90d1ec2c\") " pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:24.638291 kubelet[2083]: I0906 00:21:24.638291 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-backend-key-pair\") pod \"whisker-776955cbdc-vpgd2\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " pod="calico-system/whisker-776955cbdc-vpgd2" Sep 6 00:21:24.638493 kubelet[2083]: I0906 00:21:24.638315 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-ca-bundle\") pod \"whisker-776955cbdc-vpgd2\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " pod="calico-system/whisker-776955cbdc-vpgd2" Sep 6 00:21:24.638493 kubelet[2083]: I0906 00:21:24.638441 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56rn\" (UniqueName: \"kubernetes.io/projected/46a258a2-0c86-4099-9cef-acdc41e5ed88-kube-api-access-z56rn\") pod \"whisker-776955cbdc-vpgd2\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " pod="calico-system/whisker-776955cbdc-vpgd2" Sep 6 00:21:24.638607 kubelet[2083]: I0906 00:21:24.638512 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpvd\" (UniqueName: \"kubernetes.io/projected/319af4ab-7a09-4b77-906d-ff7b3b6e2b69-kube-api-access-djpvd\") pod \"calico-kube-controllers-7bb788675f-mrpct\" (UID: \"319af4ab-7a09-4b77-906d-ff7b3b6e2b69\") " pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" Sep 6 00:21:24.638607 kubelet[2083]: I0906 00:21:24.638541 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/600d572a-5949-4e09-892e-2ade90d1ec2c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-m85r4\" (UID: \"600d572a-5949-4e09-892e-2ade90d1ec2c\") " pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:24.638607 kubelet[2083]: I0906 00:21:24.638591 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b20a7ea0-580e-468e-bb6b-4ff362ba7f7c-calico-apiserver-certs\") pod \"calico-apiserver-69b54c6ffc-xjcwv\" (UID: \"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c\") " 
pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" Sep 6 00:21:24.638910 kubelet[2083]: I0906 00:21:24.638868 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fe05095-1fb2-4aab-8036-ef685518a4c9-config-volume\") pod \"coredns-7c65d6cfc9-4vsk6\" (UID: \"1fe05095-1fb2-4aab-8036-ef685518a4c9\") " pod="kube-system/coredns-7c65d6cfc9-4vsk6" Sep 6 00:21:24.638992 kubelet[2083]: I0906 00:21:24.638912 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48kp\" (UniqueName: \"kubernetes.io/projected/600d572a-5949-4e09-892e-2ade90d1ec2c-kube-api-access-w48kp\") pod \"goldmane-7988f88666-m85r4\" (UID: \"600d572a-5949-4e09-892e-2ade90d1ec2c\") " pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:24.638992 kubelet[2083]: I0906 00:21:24.638936 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhdz\" (UniqueName: \"kubernetes.io/projected/b20a7ea0-580e-468e-bb6b-4ff362ba7f7c-kube-api-access-kjhdz\") pod \"calico-apiserver-69b54c6ffc-xjcwv\" (UID: \"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c\") " pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" Sep 6 00:21:24.638992 kubelet[2083]: I0906 00:21:24.638970 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkrv\" (UniqueName: \"kubernetes.io/projected/1fe05095-1fb2-4aab-8036-ef685518a4c9-kube-api-access-6qkrv\") pod \"coredns-7c65d6cfc9-4vsk6\" (UID: \"1fe05095-1fb2-4aab-8036-ef685518a4c9\") " pod="kube-system/coredns-7c65d6cfc9-4vsk6" Sep 6 00:21:24.639132 kubelet[2083]: I0906 00:21:24.638996 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600d572a-5949-4e09-892e-2ade90d1ec2c-config\") pod \"goldmane-7988f88666-m85r4\" (UID: \"600d572a-5949-4e09-892e-2ade90d1ec2c\") " pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:24.639132 kubelet[2083]: I0906 00:21:24.639026 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/319af4ab-7a09-4b77-906d-ff7b3b6e2b69-tigera-ca-bundle\") pod \"calico-kube-controllers-7bb788675f-mrpct\" (UID: \"319af4ab-7a09-4b77-906d-ff7b3b6e2b69\") " pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" Sep 6 00:21:24.732741 kubelet[2083]: E0906 00:21:24.732664 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:24.735107 env[1286]: time="2025-09-06T00:21:24.735017829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-ffpz9,Uid:6b8e70a2-2b3d-4784-830f-5560042446fa,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:21:24.735456 env[1286]: time="2025-09-06T00:21:24.735408565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wghq8,Uid:68db3c5f-2540-494b-b7e8-8a05c1287332,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:24.791951 env[1286]: time="2025-09-06T00:21:24.791891131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-m85r4,Uid:600d572a-5949-4e09-892e-2ade90d1ec2c,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:24.814285 env[1286]: 
time="2025-09-06T00:21:24.812802726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 00:21:24.941956 env[1286]: time="2025-09-06T00:21:24.941731729Z" level=error msg="Failed to destroy network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.943116 env[1286]: time="2025-09-06T00:21:24.943037425Z" level=error msg="encountered an error cleaning up failed sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.943334 env[1286]: time="2025-09-06T00:21:24.943132041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sldlw,Uid:9086b069-766b-4d15-aa22-c0caba04aa75,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.943556 kubelet[2083]: E0906 00:21:24.943497 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.943668 kubelet[2083]: E0906 00:21:24.943633 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:24.943749 kubelet[2083]: E0906 00:21:24.943680 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sldlw" Sep 6 00:21:24.943833 kubelet[2083]: E0906 00:21:24.943791 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sldlw_calico-system(9086b069-766b-4d15-aa22-c0caba04aa75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sldlw_calico-system(9086b069-766b-4d15-aa22-c0caba04aa75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:24.985171 env[1286]: time="2025-09-06T00:21:24.985073104Z" level=error msg="Failed to destroy network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.985593 env[1286]: time="2025-09-06T00:21:24.985541420Z" level=error msg="encountered an error cleaning up failed sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.985747 env[1286]: time="2025-09-06T00:21:24.985620966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wghq8,Uid:68db3c5f-2540-494b-b7e8-8a05c1287332,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.986029 kubelet[2083]: E0906 00:21:24.985981 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:24.986113 kubelet[2083]: E0906 00:21:24.986079 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wghq8" Sep 6 00:21:24.986151 kubelet[2083]: E0906 00:21:24.986127 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wghq8" Sep 6 00:21:24.986259 kubelet[2083]: E0906 00:21:24.986217 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wghq8_kube-system(68db3c5f-2540-494b-b7e8-8a05c1287332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wghq8_kube-system(68db3c5f-2540-494b-b7e8-8a05c1287332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wghq8" podUID="68db3c5f-2540-494b-b7e8-8a05c1287332" Sep 6 00:21:25.008446 env[1286]: time="2025-09-06T00:21:25.008372754Z" level=error msg="Failed to destroy network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.008940 env[1286]: time="2025-09-06T00:21:25.008876810Z" level=error msg="encountered an error cleaning up failed sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.009077 env[1286]: time="2025-09-06T00:21:25.008966093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-ffpz9,Uid:6b8e70a2-2b3d-4784-830f-5560042446fa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.009345 kubelet[2083]: E0906 00:21:25.009298 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.009473 kubelet[2083]: E0906 00:21:25.009391 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" Sep 6 00:21:25.009473 kubelet[2083]: E0906 00:21:25.009423 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" Sep 6 00:21:25.009571 kubelet[2083]: E0906 00:21:25.009515 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b54c6ffc-ffpz9_calico-apiserver(6b8e70a2-2b3d-4784-830f-5560042446fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b54c6ffc-ffpz9_calico-apiserver(6b8e70a2-2b3d-4784-830f-5560042446fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" podUID="6b8e70a2-2b3d-4784-830f-5560042446fa" Sep 6 00:21:25.016218 env[1286]: time="2025-09-06T00:21:25.016121010Z" level=error msg="Failed to destroy network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.016639 env[1286]: time="2025-09-06T00:21:25.016588739Z" level=error msg="encountered an error cleaning up failed sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.016693 env[1286]: time="2025-09-06T00:21:25.016667794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-m85r4,Uid:600d572a-5949-4e09-892e-2ade90d1ec2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.017038 kubelet[2083]: E0906 00:21:25.016982 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.017159 kubelet[2083]: E0906 00:21:25.017076 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:25.017159 kubelet[2083]: E0906 00:21:25.017118 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-m85r4" Sep 6 00:21:25.017800 kubelet[2083]: E0906 00:21:25.017204 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-m85r4_calico-system(600d572a-5949-4e09-892e-2ade90d1ec2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-m85r4_calico-system(600d572a-5949-4e09-892e-2ade90d1ec2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-m85r4" podUID="600d572a-5949-4e09-892e-2ade90d1ec2c" Sep 6 00:21:25.080506 kubelet[2083]: E0906 00:21:25.080189 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:25.081067 env[1286]: time="2025-09-06T00:21:25.081029973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vsk6,Uid:1fe05095-1fb2-4aab-8036-ef685518a4c9,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:25.082839 env[1286]: time="2025-09-06T00:21:25.082799468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb788675f-mrpct,Uid:319af4ab-7a09-4b77-906d-ff7b3b6e2b69,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:25.085808 env[1286]: time="2025-09-06T00:21:25.085761854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776955cbdc-vpgd2,Uid:46a258a2-0c86-4099-9cef-acdc41e5ed88,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:25.088966 env[1286]: time="2025-09-06T00:21:25.088910729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-xjcwv,Uid:b20a7ea0-580e-468e-bb6b-4ff362ba7f7c,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:21:25.259143 env[1286]: time="2025-09-06T00:21:25.259012825Z" level=error msg="Failed to destroy network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.260223 env[1286]: time="2025-09-06T00:21:25.260177526Z" level=error msg="encountered an error cleaning up failed sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.260316 env[1286]: time="2025-09-06T00:21:25.260236130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb788675f-mrpct,Uid:319af4ab-7a09-4b77-906d-ff7b3b6e2b69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.260733 kubelet[2083]: E0906 00:21:25.260549 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.260733 kubelet[2083]: E0906 00:21:25.260618 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" Sep 6 00:21:25.260733 kubelet[2083]: E0906 00:21:25.260639 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" Sep 6 00:21:25.260919 kubelet[2083]: E0906 00:21:25.260685 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bb788675f-mrpct_calico-system(319af4ab-7a09-4b77-906d-ff7b3b6e2b69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bb788675f-mrpct_calico-system(319af4ab-7a09-4b77-906d-ff7b3b6e2b69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" podUID="319af4ab-7a09-4b77-906d-ff7b3b6e2b69" Sep 6 00:21:25.294348 env[1286]: time="2025-09-06T00:21:25.294281427Z" level=error msg="Failed to destroy network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.295125 env[1286]: time="2025-09-06T00:21:25.295061015Z" level=error msg="encountered an error cleaning up failed sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.295393 env[1286]: time="2025-09-06T00:21:25.295364654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-xjcwv,Uid:b20a7ea0-580e-468e-bb6b-4ff362ba7f7c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.295800 env[1286]: time="2025-09-06T00:21:25.295317990Z" level=error msg="Failed to destroy network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.295892 kubelet[2083]: E0906 00:21:25.295799 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.295967 kubelet[2083]: E0906 00:21:25.295880 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" Sep 6 00:21:25.295967 kubelet[2083]: E0906 00:21:25.295911 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" Sep 6 00:21:25.296097 kubelet[2083]: E0906 00:21:25.295971 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b54c6ffc-xjcwv_calico-apiserver(b20a7ea0-580e-468e-bb6b-4ff362ba7f7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b54c6ffc-xjcwv_calico-apiserver(b20a7ea0-580e-468e-bb6b-4ff362ba7f7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" podUID="b20a7ea0-580e-468e-bb6b-4ff362ba7f7c" Sep 6 00:21:25.296658 env[1286]: time="2025-09-06T00:21:25.296625578Z" level=error msg="encountered an error cleaning up failed sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.296823 env[1286]: time="2025-09-06T00:21:25.296789904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vsk6,Uid:1fe05095-1fb2-4aab-8036-ef685518a4c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.298036 kubelet[2083]: E0906 00:21:25.297871 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 6 00:21:25.298036 kubelet[2083]: E0906 00:21:25.297931 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4vsk6" Sep 6 00:21:25.298036 kubelet[2083]: E0906 00:21:25.297951 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4vsk6" Sep 6 00:21:25.298186 kubelet[2083]: E0906 00:21:25.297989 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4vsk6_kube-system(1fe05095-1fb2-4aab-8036-ef685518a4c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4vsk6_kube-system(1fe05095-1fb2-4aab-8036-ef685518a4c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4vsk6" podUID="1fe05095-1fb2-4aab-8036-ef685518a4c9" Sep 6 00:21:25.307793 env[1286]: time="2025-09-06T00:21:25.307731701Z" level=error msg="Failed to destroy network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.308309 env[1286]: time="2025-09-06T00:21:25.308269658Z" level=error msg="encountered an error cleaning up failed sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.308480 env[1286]: time="2025-09-06T00:21:25.308441671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776955cbdc-vpgd2,Uid:46a258a2-0c86-4099-9cef-acdc41e5ed88,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.308910 kubelet[2083]: E0906 00:21:25.308853 2083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 6 00:21:25.309013 kubelet[2083]: E0906 00:21:25.308928 2083 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-776955cbdc-vpgd2" Sep 6 00:21:25.309013 kubelet[2083]: E0906 00:21:25.308968 2083 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-776955cbdc-vpgd2" Sep 6 00:21:25.309091 kubelet[2083]: E0906 00:21:25.309023 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-776955cbdc-vpgd2_calico-system(46a258a2-0c86-4099-9cef-acdc41e5ed88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-776955cbdc-vpgd2_calico-system(46a258a2-0c86-4099-9cef-acdc41e5ed88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-776955cbdc-vpgd2" podUID="46a258a2-0c86-4099-9cef-acdc41e5ed88" Sep 6 00:21:25.555118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff-shm.mount: Deactivated successfully. 
Sep 6 00:21:25.814053 kubelet[2083]: I0906 00:21:25.813097 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:25.815283 env[1286]: time="2025-09-06T00:21:25.815228488Z" level=info msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" Sep 6 00:21:25.817283 kubelet[2083]: I0906 00:21:25.816733 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:25.819809 env[1286]: time="2025-09-06T00:21:25.817872151Z" level=info msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" Sep 6 00:21:25.824558 kubelet[2083]: I0906 00:21:25.823336 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:25.825099 env[1286]: time="2025-09-06T00:21:25.825046541Z" level=info msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" Sep 6 00:21:25.827475 kubelet[2083]: I0906 00:21:25.826984 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:25.827851 env[1286]: time="2025-09-06T00:21:25.827821420Z" level=info msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" Sep 6 00:21:25.831291 kubelet[2083]: I0906 00:21:25.830644 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:25.832113 env[1286]: time="2025-09-06T00:21:25.831971170Z" level=info msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" Sep 6 00:21:25.837013 kubelet[2083]: I0906 00:21:25.836320 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:25.838483 env[1286]: time="2025-09-06T00:21:25.838438825Z" level=info msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" Sep 6 00:21:25.840823 kubelet[2083]: I0906 00:21:25.840026 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:25.841176 env[1286]: time="2025-09-06T00:21:25.841141737Z" level=info msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" Sep 6 00:21:25.843864 kubelet[2083]: I0906 00:21:25.843274 2083 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:25.844723 env[1286]: time="2025-09-06T00:21:25.844479790Z" level=info msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" Sep 6 00:21:25.929808 env[1286]: time="2025-09-06T00:21:25.929693011Z" level=error msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" failed" error="failed to destroy network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 6 00:21:25.930427 kubelet[2083]: E0906 00:21:25.930244 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:25.930427 kubelet[2083]: E0906 00:21:25.930308 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3"} Sep 6 00:21:25.930427 kubelet[2083]: E0906 00:21:25.930369 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68db3c5f-2540-494b-b7e8-8a05c1287332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:25.930427 kubelet[2083]: E0906 00:21:25.930392 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68db3c5f-2540-494b-b7e8-8a05c1287332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wghq8" podUID="68db3c5f-2540-494b-b7e8-8a05c1287332" Sep 6 00:21:25.972112 env[1286]: time="2025-09-06T00:21:25.972052181Z" level=error msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" failed" error="failed to destroy network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:25.972688 kubelet[2083]: E0906 00:21:25.972524 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:25.972688 kubelet[2083]: E0906 00:21:25.972574 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c"} Sep 6 00:21:25.972688 kubelet[2083]: E0906 00:21:25.972610 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"319af4ab-7a09-4b77-906d-ff7b3b6e2b69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:25.972688 kubelet[2083]: E0906 00:21:25.972633 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"319af4ab-7a09-4b77-906d-ff7b3b6e2b69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" podUID="319af4ab-7a09-4b77-906d-ff7b3b6e2b69" Sep 6 00:21:26.001974 env[1286]: time="2025-09-06T00:21:26.001897385Z" level=error msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" failed" error="failed to destroy network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.002484 kubelet[2083]: E0906 00:21:26.002302 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:26.002484 kubelet[2083]: E0906 00:21:26.002357 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff"} Sep 6 00:21:26.002484 kubelet[2083]: E0906 00:21:26.002399 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9086b069-766b-4d15-aa22-c0caba04aa75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.002484 kubelet[2083]: E0906 00:21:26.002423 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9086b069-766b-4d15-aa22-c0caba04aa75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sldlw" podUID="9086b069-766b-4d15-aa22-c0caba04aa75" Sep 6 00:21:26.007954 env[1286]: time="2025-09-06T00:21:26.007878136Z" level=error msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" failed" error="failed to 
destroy network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.008470 kubelet[2083]: E0906 00:21:26.008289 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:26.008470 kubelet[2083]: E0906 00:21:26.008358 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960"} Sep 6 00:21:26.008470 kubelet[2083]: E0906 00:21:26.008402 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46a258a2-0c86-4099-9cef-acdc41e5ed88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.008470 kubelet[2083]: E0906 00:21:26.008425 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46a258a2-0c86-4099-9cef-acdc41e5ed88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-776955cbdc-vpgd2" podUID="46a258a2-0c86-4099-9cef-acdc41e5ed88" Sep 6 00:21:26.012793 env[1286]: time="2025-09-06T00:21:26.012732651Z" level=error msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" failed" error="failed to destroy network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.013361 kubelet[2083]: E0906 00:21:26.013195 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:26.013361 kubelet[2083]: E0906 00:21:26.013249 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c"} Sep 6 00:21:26.013361 kubelet[2083]: E0906 00:21:26.013285 2083 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fe05095-1fb2-4aab-8036-ef685518a4c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.013361 kubelet[2083]: E0906 00:21:26.013308 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fe05095-1fb2-4aab-8036-ef685518a4c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4vsk6" podUID="1fe05095-1fb2-4aab-8036-ef685518a4c9" Sep 6 00:21:26.027487 env[1286]: time="2025-09-06T00:21:26.027427614Z" level=error msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" failed" error="failed to destroy network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.028523 kubelet[2083]: E0906 00:21:26.028362 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:26.028523 kubelet[2083]: E0906 00:21:26.028414 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00"} Sep 6 00:21:26.028523 kubelet[2083]: E0906 00:21:26.028455 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b8e70a2-2b3d-4784-830f-5560042446fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.028523 kubelet[2083]: E0906 00:21:26.028480 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b8e70a2-2b3d-4784-830f-5560042446fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" 
podUID="6b8e70a2-2b3d-4784-830f-5560042446fa" Sep 6 00:21:26.031398 env[1286]: time="2025-09-06T00:21:26.031345254Z" level=error msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" failed" error="failed to destroy network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.032625 kubelet[2083]: E0906 00:21:26.032418 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:26.032625 kubelet[2083]: E0906 00:21:26.032512 2083 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b"} Sep 6 00:21:26.032625 kubelet[2083]: E0906 00:21:26.032550 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"600d572a-5949-4e09-892e-2ade90d1ec2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.032625 kubelet[2083]: E0906 00:21:26.032574 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"600d572a-5949-4e09-892e-2ade90d1ec2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-m85r4" podUID="600d572a-5949-4e09-892e-2ade90d1ec2c" Sep 6 00:21:26.040612 env[1286]: time="2025-09-06T00:21:26.040530787Z" level=error msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" failed" error="failed to destroy network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:21:26.041430 kubelet[2083]: E0906 00:21:26.041311 2083 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:26.041743 kubelet[2083]: E0906 00:21:26.041647 2083 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0"} Sep 6 00:21:26.042093 kubelet[2083]: E0906 00:21:26.041974 2083 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:21:26.042093 kubelet[2083]: E0906 00:21:26.042025 2083 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" podUID="b20a7ea0-580e-468e-bb6b-4ff362ba7f7c" Sep 6 00:21:33.136789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641177242.mount: Deactivated successfully. Sep 6 00:21:33.164258 env[1286]: time="2025-09-06T00:21:33.164166835Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:33.165484 env[1286]: time="2025-09-06T00:21:33.165450878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:33.166677 env[1286]: time="2025-09-06T00:21:33.166647055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:33.167777 env[1286]: time="2025-09-06T00:21:33.167747817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:33.168470 env[1286]: time="2025-09-06T00:21:33.168435190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 6 00:21:33.198810 env[1286]: time="2025-09-06T00:21:33.198765950Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 00:21:33.216506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304686184.mount: Deactivated successfully. 
Sep 6 00:21:33.221222 env[1286]: time="2025-09-06T00:21:33.221165067Z" level=info msg="CreateContainer within sandbox \"7716c61ea1ab10141a5c28886d5923aaff428a877dda0e294dc1ca3a35251c04\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3c188926570fb5fd00ca3c5f4e6cc57434f18ff0fe1a53f63be9611165781e23\"" Sep 6 00:21:33.223935 env[1286]: time="2025-09-06T00:21:33.223892175Z" level=info msg="StartContainer for \"3c188926570fb5fd00ca3c5f4e6cc57434f18ff0fe1a53f63be9611165781e23\"" Sep 6 00:21:33.287458 env[1286]: time="2025-09-06T00:21:33.287396568Z" level=info msg="StartContainer for \"3c188926570fb5fd00ca3c5f4e6cc57434f18ff0fe1a53f63be9611165781e23\" returns successfully" Sep 6 00:21:33.529469 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 00:21:33.529659 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 6 00:21:33.768686 env[1286]: time="2025-09-06T00:21:33.768641994Z" level=info msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" Sep 6 00:21:33.941042 kubelet[2083]: I0906 00:21:33.936950 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mztrq" podStartSLOduration=1.7465843319999999 podStartE2EDuration="21.924628009s" podCreationTimestamp="2025-09-06 00:21:12 +0000 UTC" firstStartedPulling="2025-09-06 00:21:12.991265141 +0000 UTC m=+20.640501321" lastFinishedPulling="2025-09-06 00:21:33.16930882 +0000 UTC m=+40.818544998" observedRunningTime="2025-09-06 00:21:33.912190388 +0000 UTC m=+41.561426588" watchObservedRunningTime="2025-09-06 00:21:33.924628009 +0000 UTC m=+41.573864208" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.924 [INFO][3270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.928 [INFO][3270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" iface="eth0" netns="/var/run/netns/cni-9cce2922-a78a-2e85-8a99-1623483b2bba" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.929 [INFO][3270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" iface="eth0" netns="/var/run/netns/cni-9cce2922-a78a-2e85-8a99-1623483b2bba" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.930 [INFO][3270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" iface="eth0" netns="/var/run/netns/cni-9cce2922-a78a-2e85-8a99-1623483b2bba" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.930 [INFO][3270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:33.930 [INFO][3270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.131 [INFO][3277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.133 [INFO][3277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.133 [INFO][3277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.150 [WARNING][3277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.150 [INFO][3277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.153 [INFO][3277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:34.158190 env[1286]: 2025-09-06 00:21:34.155 [INFO][3270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:34.161606 systemd[1]: run-netns-cni\x2d9cce2922\x2da78a\x2d2e85\x2d8a99\x2d1623483b2bba.mount: Deactivated successfully. 
Sep 6 00:21:34.165177 env[1286]: time="2025-09-06T00:21:34.165127040Z" level=info msg="TearDown network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" successfully" Sep 6 00:21:34.165272 env[1286]: time="2025-09-06T00:21:34.165255160Z" level=info msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" returns successfully" Sep 6 00:21:34.240297 kubelet[2083]: I0906 00:21:34.239838 2083 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z56rn\" (UniqueName: \"kubernetes.io/projected/46a258a2-0c86-4099-9cef-acdc41e5ed88-kube-api-access-z56rn\") pod \"46a258a2-0c86-4099-9cef-acdc41e5ed88\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " Sep 6 00:21:34.240658 kubelet[2083]: I0906 00:21:34.240632 2083 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-backend-key-pair\") pod \"46a258a2-0c86-4099-9cef-acdc41e5ed88\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " Sep 6 00:21:34.241046 kubelet[2083]: I0906 00:21:34.240827 2083 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-ca-bundle\") pod \"46a258a2-0c86-4099-9cef-acdc41e5ed88\" (UID: \"46a258a2-0c86-4099-9cef-acdc41e5ed88\") " Sep 6 00:21:34.249567 systemd[1]: var-lib-kubelet-pods-46a258a2\x2d0c86\x2d4099\x2d9cef\x2dacdc41e5ed88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz56rn.mount: Deactivated successfully. Sep 6 00:21:34.251595 kubelet[2083]: I0906 00:21:34.249303 2083 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a258a2-0c86-4099-9cef-acdc41e5ed88-kube-api-access-z56rn" (OuterVolumeSpecName: "kube-api-access-z56rn") pod "46a258a2-0c86-4099-9cef-acdc41e5ed88" (UID: "46a258a2-0c86-4099-9cef-acdc41e5ed88"). InnerVolumeSpecName "kube-api-access-z56rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:21:34.252263 kubelet[2083]: I0906 00:21:34.249083 2083 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "46a258a2-0c86-4099-9cef-acdc41e5ed88" (UID: "46a258a2-0c86-4099-9cef-acdc41e5ed88"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:21:34.258087 systemd[1]: var-lib-kubelet-pods-46a258a2\x2d0c86\x2d4099\x2d9cef\x2dacdc41e5ed88-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 6 00:21:34.260865 kubelet[2083]: I0906 00:21:34.260805 2083 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "46a258a2-0c86-4099-9cef-acdc41e5ed88" (UID: "46a258a2-0c86-4099-9cef-acdc41e5ed88"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:21:34.342491 kubelet[2083]: I0906 00:21:34.342432 2083 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z56rn\" (UniqueName: \"kubernetes.io/projected/46a258a2-0c86-4099-9cef-acdc41e5ed88-kube-api-access-z56rn\") on node \"ci-3510.3.8-n-0d6cc4df9c\" DevicePath \"\"" Sep 6 00:21:34.342830 kubelet[2083]: I0906 00:21:34.342802 2083 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-0d6cc4df9c\" DevicePath \"\"" Sep 6 00:21:34.342957 kubelet[2083]: I0906 00:21:34.342938 2083 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a258a2-0c86-4099-9cef-acdc41e5ed88-whisker-ca-bundle\") on node \"ci-3510.3.8-n-0d6cc4df9c\" DevicePath \"\"" Sep 6 00:21:34.882131 kubelet[2083]: I0906 00:21:34.882076 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:35.047529 kubelet[2083]: I0906 00:21:35.047449 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ba3913b9-5432-471a-822c-9e385e247f73-whisker-backend-key-pair\") pod \"whisker-bc5f44fd6-c7vdw\" (UID: \"ba3913b9-5432-471a-822c-9e385e247f73\") " pod="calico-system/whisker-bc5f44fd6-c7vdw" Sep 6 00:21:35.047529 kubelet[2083]: I0906 00:21:35.047541 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba3913b9-5432-471a-822c-9e385e247f73-whisker-ca-bundle\") pod \"whisker-bc5f44fd6-c7vdw\" (UID: \"ba3913b9-5432-471a-822c-9e385e247f73\") " pod="calico-system/whisker-bc5f44fd6-c7vdw" Sep 6 00:21:35.048175 kubelet[2083]: I0906 00:21:35.047559 2083 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42km8\" (UniqueName: \"kubernetes.io/projected/ba3913b9-5432-471a-822c-9e385e247f73-kube-api-access-42km8\") pod \"whisker-bc5f44fd6-c7vdw\" (UID: \"ba3913b9-5432-471a-822c-9e385e247f73\") " pod="calico-system/whisker-bc5f44fd6-c7vdw" Sep 6 00:21:35.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-64.227.108.127:22-116.205.240.172:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:35.265819 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 6 00:21:35.265953 kernel: audit: type=1130 audit(1757118095.260:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-64.227.108.127:22-116.205.240.172:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:35.260536 systemd[1]: Started sshd@7-64.227.108.127:22-116.205.240.172:46610.service. 
Sep 6 00:21:35.268628 env[1286]: time="2025-09-06T00:21:35.265983485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc5f44fd6-c7vdw,Uid:ba3913b9-5432-471a-822c-9e385e247f73,Namespace:calico-system,Attempt:0,}" Sep 6 00:21:35.299000 audit[3338]: AVC avc: denied { write } for pid=3338 comm="tee" name="fd" dev="proc" ino=24344 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.299000 audit[3338]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf5b347d4 a2=241 a3=1b6 items=1 ppid=3308 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.309871 kernel: audit: type=1400 audit(1757118095.299:302): avc: denied { write } for pid=3338 comm="tee" name="fd" dev="proc" ino=24344 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.309948 kernel: audit: type=1300 audit(1757118095.299:302): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf5b347d4 a2=241 a3=1b6 items=1 ppid=3308 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.309975 kernel: audit: type=1307 audit(1757118095.299:302): cwd="/etc/service/enabled/confd/log" Sep 6 00:21:35.299000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 00:21:35.299000 audit: PATH item=0 name="/dev/fd/63" inode=24810 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.313775 kernel: audit: type=1302 audit(1757118095.299:302): item=0 name="/dev/fd/63" inode=24810 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.313848 kernel: audit: type=1327 audit(1757118095.299:302): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.320000 audit[3335]: AVC avc: denied { write } for pid=3335 comm="tee" name="fd" dev="proc" ino=24356 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.320000 audit[3335]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7faba7d4 a2=241 a3=1b6 items=1 ppid=3317 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.332425 kernel: audit: type=1400 audit(1757118095.320:303): avc: denied { write } for pid=3335 comm="tee" name="fd" dev="proc" ino=24356 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.332579 kernel: audit: type=1300 audit(1757118095.320:303): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7faba7d4 a2=241 a3=1b6 items=1 ppid=3317 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.332610 kernel: audit: type=1307 audit(1757118095.320:303): cwd="/etc/service/enabled/bird6/log" Sep 6 00:21:35.320000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 00:21:35.320000 audit: PATH item=0 name="/dev/fd/63" inode=24805 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.336291 kernel: audit: type=1302 audit(1757118095.320:303): item=0 name="/dev/fd/63" inode=24805 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.326000 audit[3350]: AVC avc: denied { write } for pid=3350 comm="tee" name="fd" dev="proc" ino=24360 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.326000 audit[3350]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2ddb17c5 a2=241 a3=1b6 items=1 ppid=3310 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.326000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 00:21:35.326000 audit: PATH item=0 name="/dev/fd/63" inode=24840 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.329000 audit[3348]: AVC avc: denied { write } for pid=3348 comm="tee" name="fd" dev="proc" ino=24367 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.329000 audit[3348]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffea13c77d5 a2=241 a3=1b6 items=1 ppid=3307 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.329000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 00:21:35.329000 audit: PATH item=0 name="/dev/fd/63" inode=24837 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.329000 audit[3367]: AVC avc: denied { write } for pid=3367 comm="tee" name="fd" dev="proc" ino=24371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.329000 audit[3367]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd479947d4 a2=241 a3=1b6 items=1 ppid=3315 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.329000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 00:21:35.329000 audit: PATH item=0 name="/dev/fd/63" inode=24850 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.333000 audit[3370]: AVC avc: denied { write } for pid=3370 comm="tee" name="fd" dev="proc" ino=24375 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.333000 audit[3370]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffda82df7c4 a2=241 a3=1b6 items=1 ppid=3321 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.333000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 00:21:35.333000 audit: PATH item=0 name="/dev/fd/63" inode=24355 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.357000 audit[3386]: AVC avc: denied { write } for pid=3386 comm="tee" name="fd" dev="proc" ino=24383 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:21:35.357000 audit[3386]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd04dba7d6 a2=241 a3=1b6 items=1 ppid=3319 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:35.357000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 00:21:35.357000 audit: PATH item=0 name="/dev/fd/63" inode=24364 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:35.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:21:35.628680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:35.628865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali087274c7cb7: link becomes ready Sep 6 00:21:35.635219 systemd-networkd[1050]: cali087274c7cb7: Link UP Sep 6 00:21:35.635825 systemd-networkd[1050]: cali087274c7cb7: Gained carrier Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.416 [INFO][3355] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.462 [INFO][3355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0 whisker-bc5f44fd6- calico-system ba3913b9-5432-471a-822c-9e385e247f73 956 0 2025-09-06 00:21:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bc5f44fd6 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c whisker-bc5f44fd6-c7vdw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali087274c7cb7 [] [] }} ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.462 [INFO][3355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.534 [INFO][3400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" HandleID="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.534 [INFO][3400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" HandleID="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"whisker-bc5f44fd6-c7vdw", "timestamp":"2025-09-06 00:21:35.533953703 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.535 [INFO][3400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.535 [INFO][3400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.535 [INFO][3400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.544 [INFO][3400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.558 [INFO][3400] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.566 [INFO][3400] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.569 [INFO][3400] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.572 [INFO][3400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.572 [INFO][3400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.575 [INFO][3400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.593 [INFO][3400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.603 [INFO][3400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.193/26] block=192.168.86.192/26 handle="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.603 [INFO][3400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.193/26] handle="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.603 [INFO][3400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:21:35.675321 env[1286]: 2025-09-06 00:21:35.603 [INFO][3400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.193/26] IPv6=[] ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" HandleID="k8s-pod-network.2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.607 [INFO][3355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0", GenerateName:"whisker-bc5f44fd6-", Namespace:"calico-system", SelfLink:"", UID:"ba3913b9-5432-471a-822c-9e385e247f73", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc5f44fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"whisker-bc5f44fd6-c7vdw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali087274c7cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.607 [INFO][3355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.193/32] ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.607 [INFO][3355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali087274c7cb7 ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.630 [INFO][3355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.630 [INFO][3355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" 
WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0", GenerateName:"whisker-bc5f44fd6-", Namespace:"calico-system", SelfLink:"", UID:"ba3913b9-5432-471a-822c-9e385e247f73", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc5f44fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df", Pod:"whisker-bc5f44fd6-c7vdw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali087274c7cb7", MAC:"9a:dc:05:04:33:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:35.676417 env[1286]: 2025-09-06 00:21:35.658 [INFO][3355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df" Namespace="calico-system" Pod="whisker-bc5f44fd6-c7vdw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--bc5f44fd6--c7vdw-eth0" Sep 6 00:21:35.713748 env[1286]: time="2025-09-06T00:21:35.713460113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:35.713748 env[1286]: time="2025-09-06T00:21:35.713508324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:35.713748 env[1286]: time="2025-09-06T00:21:35.713521348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:35.714042 env[1286]: time="2025-09-06T00:21:35.713703597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df pid=3425 runtime=io.containerd.runc.v2 Sep 6 00:21:35.824016 env[1286]: time="2025-09-06T00:21:35.823956922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc5f44fd6-c7vdw,Uid:ba3913b9-5432-471a-822c-9e385e247f73,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df\"" Sep 6 00:21:35.830360 env[1286]: time="2025-09-06T00:21:35.830308383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 00:21:36.209762 sshd[3342]: Invalid user bash from 116.205.240.172 port 46610 Sep 6 00:21:36.219659 sshd[3342]: pam_faillock(sshd:auth): User unknown Sep 6 00:21:36.220982 sshd[3342]: pam_unix(sshd:auth): check pass; user unknown Sep 6 00:21:36.221156 sshd[3342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.205.240.172 Sep 6 00:21:36.221891 sshd[3342]: pam_faillock(sshd:auth): User unknown Sep 6 00:21:36.221000 audit[3342]: USER_AUTH pid=3342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="bash" exe="/usr/sbin/sshd" hostname=116.205.240.172 addr=116.205.240.172 terminal=ssh res=failed' Sep 6 00:21:36.614561 kubelet[2083]: I0906 00:21:36.614507 2083 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a258a2-0c86-4099-9cef-acdc41e5ed88" path="/var/lib/kubelet/pods/46a258a2-0c86-4099-9cef-acdc41e5ed88/volumes" Sep 6 00:21:37.380682 env[1286]: time="2025-09-06T00:21:37.380632664Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:37.383027 env[1286]: time="2025-09-06T00:21:37.382973082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:37.385109 env[1286]: time="2025-09-06T00:21:37.385070350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:37.387171 env[1286]: time="2025-09-06T00:21:37.387123839Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:37.388232 env[1286]: time="2025-09-06T00:21:37.388197788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 6 00:21:37.392574 env[1286]: time="2025-09-06T00:21:37.392533516Z" level=info msg="CreateContainer within sandbox \"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 00:21:37.404384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226439343.mount: Deactivated successfully. 
Sep 6 00:21:37.413063 env[1286]: time="2025-09-06T00:21:37.413014946Z" level=info msg="CreateContainer within sandbox \"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"32b1871f6fa49a3fb1d4b9d75cadf94751ddadce06034bb0df212c3f6d62fe76\"" Sep 6 00:21:37.415160 env[1286]: time="2025-09-06T00:21:37.414895316Z" level=info msg="StartContainer for \"32b1871f6fa49a3fb1d4b9d75cadf94751ddadce06034bb0df212c3f6d62fe76\"" Sep 6 00:21:37.497377 env[1286]: time="2025-09-06T00:21:37.497329392Z" level=info msg="StartContainer for \"32b1871f6fa49a3fb1d4b9d75cadf94751ddadce06034bb0df212c3f6d62fe76\" returns successfully" Sep 6 00:21:37.501334 env[1286]: time="2025-09-06T00:21:37.501296554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 00:21:37.613022 env[1286]: time="2025-09-06T00:21:37.612168343Z" level=info msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" Sep 6 00:21:37.629970 systemd-networkd[1050]: cali087274c7cb7: Gained IPv6LL Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.682 [INFO][3532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.683 [INFO][3532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" iface="eth0" netns="/var/run/netns/cni-58fe9791-3480-9608-d81d-988dd34c19ce" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.683 [INFO][3532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" iface="eth0" netns="/var/run/netns/cni-58fe9791-3480-9608-d81d-988dd34c19ce" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.683 [INFO][3532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" iface="eth0" netns="/var/run/netns/cni-58fe9791-3480-9608-d81d-988dd34c19ce" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.683 [INFO][3532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.683 [INFO][3532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.726 [INFO][3539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.726 [INFO][3539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.726 [INFO][3539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.734 [WARNING][3539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.734 [INFO][3539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.737 [INFO][3539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:37.742385 env[1286]: 2025-09-06 00:21:37.739 [INFO][3532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:37.749132 systemd[1]: run-netns-cni\x2d58fe9791\x2d3480\x2d9608\x2dd81d\x2d988dd34c19ce.mount: Deactivated successfully. Sep 6 00:21:37.750563 env[1286]: time="2025-09-06T00:21:37.750500047Z" level=info msg="TearDown network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" successfully" Sep 6 00:21:37.750563 env[1286]: time="2025-09-06T00:21:37.750559757Z" level=info msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" returns successfully" Sep 6 00:21:37.751855 env[1286]: time="2025-09-06T00:21:37.751813195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb788675f-mrpct,Uid:319af4ab-7a09-4b77-906d-ff7b3b6e2b69,Namespace:calico-system,Attempt:1,}" Sep 6 00:21:37.931445 systemd-networkd[1050]: cali5547e40703c: Link UP Sep 6 00:21:37.937061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:37.937195 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5547e40703c: link becomes ready Sep 6 00:21:37.936826 systemd-networkd[1050]: cali5547e40703c: Gained carrier Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.799 [INFO][3545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.812 [INFO][3545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0 calico-kube-controllers-7bb788675f- calico-system 319af4ab-7a09-4b77-906d-ff7b3b6e2b69 970 0 2025-09-06 00:21:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bb788675f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c calico-kube-controllers-7bb788675f-mrpct eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5547e40703c [] [] }} ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.812 [INFO][3545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" 
Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.881 [INFO][3564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" HandleID="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.882 [INFO][3564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" HandleID="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"calico-kube-controllers-7bb788675f-mrpct", "timestamp":"2025-09-06 00:21:37.881673885 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.882 [INFO][3564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.882 [INFO][3564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.882 [INFO][3564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.892 [INFO][3564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.897 [INFO][3564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.902 [INFO][3564] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.905 [INFO][3564] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.909 [INFO][3564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.909 [INFO][3564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.911 [INFO][3564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.916 [INFO][3564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 
00:21:37.971109 env[1286]: 2025-09-06 00:21:37.923 [INFO][3564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.194/26] block=192.168.86.192/26 handle="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.923 [INFO][3564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.194/26] handle="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.923 [INFO][3564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:37.971109 env[1286]: 2025-09-06 00:21:37.923 [INFO][3564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.194/26] IPv6=[] ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" HandleID="k8s-pod-network.f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.925 [INFO][3545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0", GenerateName:"calico-kube-controllers-7bb788675f-", Namespace:"calico-system", SelfLink:"", UID:"319af4ab-7a09-4b77-906d-ff7b3b6e2b69", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb788675f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"calico-kube-controllers-7bb788675f-mrpct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5547e40703c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.926 [INFO][3545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.194/32] ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.926 [INFO][3545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5547e40703c 
ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.938 [INFO][3545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.939 [INFO][3545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0", GenerateName:"calico-kube-controllers-7bb788675f-", Namespace:"calico-system", SelfLink:"", UID:"319af4ab-7a09-4b77-906d-ff7b3b6e2b69", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb788675f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e", Pod:"calico-kube-controllers-7bb788675f-mrpct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5547e40703c", MAC:"a6:6f:bd:74:46:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:37.971938 env[1286]: 2025-09-06 00:21:37.955 [INFO][3545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e" Namespace="calico-system" Pod="calico-kube-controllers-7bb788675f-mrpct" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:37.989232 sshd[3342]: Failed password for invalid user bash from 116.205.240.172 port 46610 ssh2 Sep 6 00:21:37.992534 env[1286]: time="2025-09-06T00:21:37.990773267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:37.992534 env[1286]: time="2025-09-06T00:21:37.990876025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:37.992534 env[1286]: time="2025-09-06T00:21:37.990899905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:37.992534 env[1286]: time="2025-09-06T00:21:37.991057400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e pid=3597 runtime=io.containerd.runc.v2 Sep 6 00:21:38.096808 env[1286]: time="2025-09-06T00:21:38.096743769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb788675f-mrpct,Uid:319af4ab-7a09-4b77-906d-ff7b3b6e2b69,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e\"" Sep 6 00:21:38.612117 env[1286]: time="2025-09-06T00:21:38.612035663Z" level=info msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" Sep 6 00:21:38.613810 env[1286]: time="2025-09-06T00:21:38.613768337Z" level=info msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" Sep 6 00:21:38.614104 env[1286]: time="2025-09-06T00:21:38.614069937Z" level=info msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.692 [INFO][3666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.693 [INFO][3666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" iface="eth0" netns="/var/run/netns/cni-c11f71b7-4ee7-771f-082d-70c6a9af76e8" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.693 [INFO][3666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" iface="eth0" netns="/var/run/netns/cni-c11f71b7-4ee7-771f-082d-70c6a9af76e8" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.694 [INFO][3666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" iface="eth0" netns="/var/run/netns/cni-c11f71b7-4ee7-771f-082d-70c6a9af76e8" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.694 [INFO][3666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.694 [INFO][3666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.795 [INFO][3682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.798 [INFO][3682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.798 [INFO][3682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
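The teardown traces above end with a WARNING that the plugin was "Asked to release address but it doesn't exist" and then carry on: the CNI DEL path treats a missing IPAM handle as already released, so a repeated delete of the same sandbox stays harmless. Below is a minimal sketch of that idempotent-release pattern; the in-memory store and handle scheme are invented for illustration and are not Calico's real datastore API.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// errNotFound stands in for "no allocation recorded under this handle".
var errNotFound = errors.New("handle not found")

// ipamStore is a toy in-memory allocator keyed by handle ID
// (in the log, handles look like "k8s-pod-network.<containerID>").
type ipamStore struct {
	mu       sync.Mutex // plays the role of the "host-wide IPAM lock" in the trace
	byHandle map[string]string
}

func (s *ipamStore) releaseByHandle(handle string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.byHandle[handle]; !ok {
		return errNotFound
	}
	delete(s.byHandle, handle)
	return nil
}

// cniDel mirrors the behaviour in the trace: a missing allocation is
// logged and ignored, so a second DEL for the same container succeeds.
func cniDel(s *ipamStore, containerID string) error {
	handle := "k8s-pod-network." + containerID
	if err := s.releaseByHandle(handle); errors.Is(err, errNotFound) {
		fmt.Printf("WARNING: asked to release address but it doesn't exist; ignoring handle %s\n", handle)
		return nil
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// Two DELs for the same sandbox: the second finds nothing and is a no-op.
	_ = cniDel(s, "e19409e81f02f4ab")
	_ = cniDel(s, "e19409e81f02f4ab")
}

Because the second delete is a no-op, StopPodSandbox can be retried without the IPAM state drifting.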
Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.808 [WARNING][3682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.808 [INFO][3682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.812 [INFO][3682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:38.817079 env[1286]: 2025-09-06 00:21:38.815 [INFO][3666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:38.824563 systemd[1]: run-netns-cni\x2dc11f71b7\x2d4ee7\x2d771f\x2d082d\x2d70c6a9af76e8.mount: Deactivated successfully. Sep 6 00:21:38.828150 env[1286]: time="2025-09-06T00:21:38.826770863Z" level=info msg="TearDown network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" successfully" Sep 6 00:21:38.828150 env[1286]: time="2025-09-06T00:21:38.826821778Z" level=info msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" returns successfully" Sep 6 00:21:38.829393 kubelet[2083]: E0906 00:21:38.828695 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:38.833750 env[1286]: time="2025-09-06T00:21:38.833650770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wghq8,Uid:68db3c5f-2540-494b-b7e8-8a05c1287332,Namespace:kube-system,Attempt:1,}" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.718 [INFO][3660] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.718 [INFO][3660] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" iface="eth0" netns="/var/run/netns/cni-ba5a4a30-fd01-6ab6-304e-00628595f47b" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.718 [INFO][3660] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" iface="eth0" netns="/var/run/netns/cni-ba5a4a30-fd01-6ab6-304e-00628595f47b" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.719 [INFO][3660] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" iface="eth0" netns="/var/run/netns/cni-ba5a4a30-fd01-6ab6-304e-00628595f47b" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.719 [INFO][3660] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.719 [INFO][3660] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.822 [INFO][3688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.824 [INFO][3688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.824 [INFO][3688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.833 [WARNING][3688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.834 [INFO][3688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.840 [INFO][3688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:38.878856 env[1286]: 2025-09-06 00:21:38.847 [INFO][3660] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:38.879577 env[1286]: time="2025-09-06T00:21:38.879524480Z" level=info msg="TearDown network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" successfully" Sep 6 00:21:38.879697 env[1286]: time="2025-09-06T00:21:38.879678383Z" level=info msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" returns successfully" Sep 6 00:21:38.881353 kubelet[2083]: E0906 00:21:38.880351 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:38.890637 systemd[1]: run-netns-cni\x2dba5a4a30\x2dfd01\x2d6ab6\x2d304e\x2d00628595f47b.mount: Deactivated successfully. 
Sep 6 00:21:38.892988 env[1286]: time="2025-09-06T00:21:38.892551440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vsk6,Uid:1fe05095-1fb2-4aab-8036-ef685518a4c9,Namespace:kube-system,Attempt:1,}" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.768 [INFO][3665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.768 [INFO][3665] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" iface="eth0" netns="/var/run/netns/cni-f62505a9-f5fc-0495-de4c-1066177eb5bd" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.768 [INFO][3665] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" iface="eth0" netns="/var/run/netns/cni-f62505a9-f5fc-0495-de4c-1066177eb5bd" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.769 [INFO][3665] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" iface="eth0" netns="/var/run/netns/cni-f62505a9-f5fc-0495-de4c-1066177eb5bd" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.769 [INFO][3665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.769 [INFO][3665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.906 [INFO][3695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.907 [INFO][3695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.907 [INFO][3695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.918 [WARNING][3695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.918 [INFO][3695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.920 [INFO][3695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:38.937568 env[1286]: 2025-09-06 00:21:38.930 [INFO][3665] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:38.939456 env[1286]: time="2025-09-06T00:21:38.939382025Z" level=info msg="TearDown network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" successfully" Sep 6 00:21:38.939689 env[1286]: time="2025-09-06T00:21:38.939639379Z" level=info msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" returns successfully" Sep 6 00:21:38.955164 env[1286]: time="2025-09-06T00:21:38.955106190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-xjcwv,Uid:b20a7ea0-580e-468e-bb6b-4ff362ba7f7c,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:21:39.038073 systemd-networkd[1050]: cali5547e40703c: Gained IPv6LL Sep 6 00:21:39.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-64.227.108.127:22-116.205.240.172:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:39.082993 sshd[3342]: Received disconnect from 116.205.240.172 port 46610:11: Bye Bye [preauth] Sep 6 00:21:39.082993 sshd[3342]: Disconnected from invalid user bash 116.205.240.172 port 46610 [preauth] Sep 6 00:21:39.076317 systemd[1]: sshd@7-64.227.108.127:22-116.205.240.172:46610.service: Deactivated successfully. Sep 6 00:21:39.310259 systemd-networkd[1050]: cali294bfc57275: Link UP Sep 6 00:21:39.338492 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:39.338694 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali294bfc57275: link becomes ready Sep 6 00:21:39.338963 systemd-networkd[1050]: cali294bfc57275: Gained carrier Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:38.967 [INFO][3701] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:38.983 [INFO][3701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0 coredns-7c65d6cfc9- kube-system 68db3c5f-2540-494b-b7e8-8a05c1287332 979 0 2025-09-06 00:20:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c coredns-7c65d6cfc9-wghq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali294bfc57275 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:38.983 [INFO][3701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.186 [INFO][3730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" HandleID="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.375953 
env[1286]: 2025-09-06 00:21:39.186 [INFO][3730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" HandleID="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"coredns-7c65d6cfc9-wghq8", "timestamp":"2025-09-06 00:21:39.186479069 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.186 [INFO][3730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.194 [INFO][3730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.194 [INFO][3730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.213 [INFO][3730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.228 [INFO][3730] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.264 [INFO][3730] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.274 [INFO][3730] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.278 [INFO][3730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.278 [INFO][3730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.281 [INFO][3730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.287 [INFO][3730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.296 [INFO][3730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.195/26] block=192.168.86.192/26 handle="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.297 [INFO][3730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.195/26] handle="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.297 [INFO][3730] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Sep 6 00:21:39.375953 env[1286]: 2025-09-06 00:21:39.297 [INFO][3730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.195/26] IPv6=[] ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" HandleID="k8s-pod-network.c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.303 [INFO][3701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68db3c5f-2540-494b-b7e8-8a05c1287332", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"coredns-7c65d6cfc9-wghq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali294bfc57275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.304 [INFO][3701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.195/32] ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.304 [INFO][3701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali294bfc57275 ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.343 [INFO][3701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" 
WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.348 [INFO][3701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68db3c5f-2540-494b-b7e8-8a05c1287332", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc", Pod:"coredns-7c65d6cfc9-wghq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali294bfc57275", MAC:"8a:a3:fe:68:0a:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.377206 env[1286]: 2025-09-06 00:21:39.369 [INFO][3701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wghq8" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:39.410359 systemd[1]: run-netns-cni\x2df62505a9\x2df5fc\x2d0495\x2dde4c\x2d1066177eb5bd.mount: Deactivated successfully. Sep 6 00:21:39.472804 env[1286]: time="2025-09-06T00:21:39.472613890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:39.472804 env[1286]: time="2025-09-06T00:21:39.472661376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:39.472804 env[1286]: time="2025-09-06T00:21:39.472672764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:39.496809 env[1286]: time="2025-09-06T00:21:39.473161088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc pid=3792 runtime=io.containerd.runc.v2 Sep 6 00:21:39.503537 systemd-networkd[1050]: calic6ff154353e: Link UP Sep 6 00:21:39.513297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic6ff154353e: link becomes ready Sep 6 00:21:39.512454 systemd-networkd[1050]: calic6ff154353e: Gained carrier Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.152 [INFO][3713] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.192 [INFO][3713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0 coredns-7c65d6cfc9- kube-system 1fe05095-1fb2-4aab-8036-ef685518a4c9 980 0 2025-09-06 00:20:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c coredns-7c65d6cfc9-4vsk6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6ff154353e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.192 [INFO][3713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.396 [INFO][3765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" HandleID="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.396 [INFO][3765] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" HandleID="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"coredns-7c65d6cfc9-4vsk6", "timestamp":"2025-09-06 00:21:39.396367564 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.397 [INFO][3765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.407 [INFO][3765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.407 [INFO][3765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.427 [INFO][3765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.434 [INFO][3765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.444 [INFO][3765] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.447 [INFO][3765] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.451 [INFO][3765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.451 [INFO][3765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.453 [INFO][3765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1 Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.459 [INFO][3765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.467 [INFO][3765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.196/26] block=192.168.86.192/26 handle="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.467 [INFO][3765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.196/26] handle="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.467 [INFO][3765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
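The ipam.go trace above walks the same sequence for every pod on this node: take the host-wide IPAM lock, look up the host's block affinities, try the affine block 192.168.86.192/26, load it, and claim the next free /32 (.194, .195 and .196 in turn). A compact sketch of that "next free address in an affine block" step is below; it uses a toy in-memory block instead of Calico's datastore and ignores handles, affinities and retries.

package main

import (
	"fmt"
	"net"
)

// block is a toy stand-in for an IPAM block: a /26 plus the set of
// already-claimed addresses, as in the "Attempting to assign 1 addresses
// from block" lines above.
type block struct {
	cidr    *net.IPNet
	claimed map[string]bool
}

// assignNext walks the block's addresses in order and claims the first
// free one, mimicking the sequential assignments seen in the log.
func (b *block) assignNext() (net.IP, bool) {
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if !b.claimed[ip.String()] {
			b.claimed[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

// nextIP returns ip+1 without mutating its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.86.192/26")
	b := &block{cidr: cidr, claimed: map[string]bool{
		"192.168.86.192": true, // network address, treated as unusable here
		"192.168.86.193": true, // assumed already held by an earlier endpoint
	}}
	for i := 0; i < 3; i++ {
		ip, _ := b.assignNext()
		fmt.Println("claimed", ip) // .194, .195, .196 in order
	}
}

Run as-is, the sketch claims .194, .195 and .196 in sequence, matching the addresses handed to the kube-controllers, coredns-wghq8 and coredns-4vsk6 endpoints in the traces above.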
Sep 6 00:21:39.537931 env[1286]: 2025-09-06 00:21:39.467 [INFO][3765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.196/26] IPv6=[] ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" HandleID="k8s-pod-network.3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.500 [INFO][3713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1fe05095-1fb2-4aab-8036-ef685518a4c9", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"coredns-7c65d6cfc9-4vsk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6ff154353e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.500 [INFO][3713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.196/32] ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.500 [INFO][3713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6ff154353e ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.514 [INFO][3713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" 
WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.514 [INFO][3713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1fe05095-1fb2-4aab-8036-ef685518a4c9", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1", Pod:"coredns-7c65d6cfc9-4vsk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6ff154353e", MAC:"12:ed:a6:f1:c6:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.539898 env[1286]: 2025-09-06 00:21:39.530 [INFO][3713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4vsk6" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:39.600823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3e4648bbb28: link becomes ready Sep 6 00:21:39.600066 systemd-networkd[1050]: cali3e4648bbb28: Link UP Sep 6 00:21:39.600467 systemd-networkd[1050]: cali3e4648bbb28: Gained carrier Sep 6 00:21:39.616783 env[1286]: time="2025-09-06T00:21:39.612457286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:39.616783 env[1286]: time="2025-09-06T00:21:39.612533669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:39.616783 env[1286]: time="2025-09-06T00:21:39.612545229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:39.619870 env[1286]: time="2025-09-06T00:21:39.619830042Z" level=info msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.141 [INFO][3725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.188 [INFO][3725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0 calico-apiserver-69b54c6ffc- calico-apiserver b20a7ea0-580e-468e-bb6b-4ff362ba7f7c 981 0 2025-09-06 00:21:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b54c6ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c calico-apiserver-69b54c6ffc-xjcwv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e4648bbb28 [] [] }} ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.188 [INFO][3725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.396 [INFO][3764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" HandleID="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.397 [INFO][3764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" HandleID="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cda10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"calico-apiserver-69b54c6ffc-xjcwv", "timestamp":"2025-09-06 00:21:39.396238183 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.406 [INFO][3764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.468 [INFO][3764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.468 [INFO][3764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.529 [INFO][3764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.536 [INFO][3764] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.543 [INFO][3764] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.546 [INFO][3764] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.548 [INFO][3764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.549 [INFO][3764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.551 [INFO][3764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.557 [INFO][3764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.564 [INFO][3764] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.197/26] block=192.168.86.192/26 handle="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.564 [INFO][3764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.197/26] handle="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.564 [INFO][3764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
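The Workload and WorkloadEndpoint names in these traces (for example ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0 for node ci-3510.3.8-n-0d6cc4df9c, pod calico-apiserver-69b54c6ffc-xjcwv and interface eth0) follow a visible pattern: node, orchestrator, pod and interface are joined with single dashes, and any dash inside the node or pod name is doubled. The small sketch below reproduces that encoding, assuming the doubling rule seen in this log is the only escaping involved.

package main

import (
	"fmt"
	"strings"
)

// escape doubles the dashes inside a name segment, as seen in the
// WorkloadEndpoint names above (assumed to be the only escaping in play).
func escape(s string) string {
	return strings.ReplaceAll(s, "-", "--")
}

// wepName joins node, the "k8s" orchestrator tag, pod and interface
// with single dashes.
func wepName(node, pod, iface string) string {
	return escape(node) + "-k8s-" + escape(pod) + "-" + iface
}

func main() {
	got := wepName("ci-3510.3.8-n-0d6cc4df9c", "calico-apiserver-69b54c6ffc-xjcwv", "eth0")
	want := "ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0"
	fmt.Println(got == want, got) // true, matching the Workload field above
}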
Sep 6 00:21:39.645924 env[1286]: 2025-09-06 00:21:39.564 [INFO][3764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.197/26] IPv6=[] ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" HandleID="k8s-pod-network.4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.568 [INFO][3725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"calico-apiserver-69b54c6ffc-xjcwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e4648bbb28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.568 [INFO][3725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.197/32] ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.568 [INFO][3725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e4648bbb28 ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.600 [INFO][3725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.600 [INFO][3725] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd", Pod:"calico-apiserver-69b54c6ffc-xjcwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e4648bbb28", MAC:"7a:5d:0a:d3:fb:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:39.646993 env[1286]: 2025-09-06 00:21:39.618 [INFO][3725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-xjcwv" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:39.655909 env[1286]: time="2025-09-06T00:21:39.622492113Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1 pid=3837 runtime=io.containerd.runc.v2 Sep 6 00:21:39.706518 env[1286]: time="2025-09-06T00:21:39.706472603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wghq8,Uid:68db3c5f-2540-494b-b7e8-8a05c1287332,Namespace:kube-system,Attempt:1,} returns sandbox id \"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc\"" Sep 6 00:21:39.713206 kubelet[2083]: E0906 00:21:39.713175 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:39.717162 env[1286]: time="2025-09-06T00:21:39.717122468Z" level=info msg="CreateContainer within sandbox \"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:39.724687 env[1286]: time="2025-09-06T00:21:39.724607658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:39.724949 env[1286]: time="2025-09-06T00:21:39.724651250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:39.724949 env[1286]: time="2025-09-06T00:21:39.724683718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:39.724949 env[1286]: time="2025-09-06T00:21:39.724860223Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd pid=3896 runtime=io.containerd.runc.v2 Sep 6 00:21:39.732294 env[1286]: time="2025-09-06T00:21:39.732249893Z" level=info msg="CreateContainer within sandbox \"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1006ce02edf2b747f66bf80f1161abf80e1605affdac2e01ee032a4a327df685\"" Sep 6 00:21:39.734877 env[1286]: time="2025-09-06T00:21:39.734835466Z" level=info msg="StartContainer for \"1006ce02edf2b747f66bf80f1161abf80e1605affdac2e01ee032a4a327df685\"" Sep 6 00:21:39.809575 env[1286]: time="2025-09-06T00:21:39.809527774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4vsk6,Uid:1fe05095-1fb2-4aab-8036-ef685518a4c9,Namespace:kube-system,Attempt:1,} returns sandbox id \"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1\"" Sep 6 00:21:39.810317 kubelet[2083]: E0906 00:21:39.810290 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:39.858392 env[1286]: time="2025-09-06T00:21:39.858341675Z" level=info msg="CreateContainer within sandbox \"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:39.896593 env[1286]: time="2025-09-06T00:21:39.896454952Z" level=info msg="CreateContainer within sandbox \"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fc77040d92334b1a5fee77514040744860e74e6eeb2f692a14577a20882a7bf\"" Sep 6 00:21:39.899449 env[1286]: time="2025-09-06T00:21:39.899406554Z" level=info msg="StartContainer for \"2fc77040d92334b1a5fee77514040744860e74e6eeb2f692a14577a20882a7bf\"" Sep 6 00:21:39.960137 env[1286]: time="2025-09-06T00:21:39.959982634Z" level=info msg="StartContainer for \"1006ce02edf2b747f66bf80f1161abf80e1605affdac2e01ee032a4a327df685\" returns successfully" Sep 6 00:21:39.980206 env[1286]: time="2025-09-06T00:21:39.980145266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-xjcwv,Uid:b20a7ea0-580e-468e-bb6b-4ff362ba7f7c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd\"" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.807 [INFO][3871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.807 [INFO][3871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" iface="eth0" netns="/var/run/netns/cni-b1462364-7a14-4f6c-0710-db58419219bc" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.808 [INFO][3871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" iface="eth0" netns="/var/run/netns/cni-b1462364-7a14-4f6c-0710-db58419219bc" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.808 [INFO][3871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" iface="eth0" netns="/var/run/netns/cni-b1462364-7a14-4f6c-0710-db58419219bc" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.808 [INFO][3871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:39.808 [INFO][3871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.057 [INFO][3950] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.057 [INFO][3950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.057 [INFO][3950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.066 [WARNING][3950] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.066 [INFO][3950] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.070 [INFO][3950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:40.085430 env[1286]: 2025-09-06 00:21:40.075 [INFO][3871] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:40.087055 env[1286]: time="2025-09-06T00:21:40.087004106Z" level=info msg="TearDown network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" successfully" Sep 6 00:21:40.087279 env[1286]: time="2025-09-06T00:21:40.087254344Z" level=info msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" returns successfully" Sep 6 00:21:40.088496 env[1286]: time="2025-09-06T00:21:40.088355260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sldlw,Uid:9086b069-766b-4d15-aa22-c0caba04aa75,Namespace:calico-system,Attempt:1,}" Sep 6 00:21:40.159585 env[1286]: time="2025-09-06T00:21:40.158887621Z" level=info msg="StartContainer for \"2fc77040d92334b1a5fee77514040744860e74e6eeb2f692a14577a20882a7bf\" returns successfully" Sep 6 00:21:40.405944 systemd[1]: run-netns-cni\x2db1462364\x2d7a14\x2d4f6c\x2d0710\x2ddb58419219bc.mount: Deactivated successfully. Sep 6 00:21:40.419783 systemd-networkd[1050]: cali59749933b4d: Link UP Sep 6 00:21:40.422412 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:40.422515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali59749933b4d: link becomes ready Sep 6 00:21:40.431620 systemd-networkd[1050]: cali59749933b4d: Gained carrier Sep 6 00:21:40.445993 systemd-networkd[1050]: cali294bfc57275: Gained IPv6LL Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.173 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.210 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0 csi-node-driver- calico-system 9086b069-766b-4d15-aa22-c0caba04aa75 1000 0 2025-09-06 00:21:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c csi-node-driver-sldlw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali59749933b4d [] [] }} ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.210 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.314 [INFO][4033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" HandleID="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.315 [INFO][4033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" 
HandleID="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"csi-node-driver-sldlw", "timestamp":"2025-09-06 00:21:40.314948357 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.315 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.315 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.315 [INFO][4033] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.333 [INFO][4033] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.344 [INFO][4033] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.361 [INFO][4033] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.364 [INFO][4033] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.373 [INFO][4033] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.373 [INFO][4033] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.378 [INFO][4033] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808 Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.392 [INFO][4033] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.412 [INFO][4033] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.198/26] block=192.168.86.192/26 handle="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.412 [INFO][4033] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.198/26] handle="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.412 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:21:40.458328 env[1286]: 2025-09-06 00:21:40.412 [INFO][4033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.198/26] IPv6=[] ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" HandleID="k8s-pod-network.d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.417 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9086b069-766b-4d15-aa22-c0caba04aa75", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"csi-node-driver-sldlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59749933b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.417 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.198/32] ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.417 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59749933b4d ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.432 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.433 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" 
Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9086b069-766b-4d15-aa22-c0caba04aa75", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808", Pod:"csi-node-driver-sldlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59749933b4d", MAC:"8a:68:df:88:9f:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:40.459455 env[1286]: 2025-09-06 00:21:40.445 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808" Namespace="calico-system" Pod="csi-node-driver-sldlw" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:40.530800 env[1286]: time="2025-09-06T00:21:40.528494156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:40.530800 env[1286]: time="2025-09-06T00:21:40.528550939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:40.530800 env[1286]: time="2025-09-06T00:21:40.528561999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:40.530800 env[1286]: time="2025-09-06T00:21:40.528752072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808 pid=4066 runtime=io.containerd.runc.v2 Sep 6 00:21:40.570018 systemd[1]: run-containerd-runc-k8s.io-d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808-runc.5pKT2Y.mount: Deactivated successfully. 
Sep 6 00:21:40.641402 env[1286]: time="2025-09-06T00:21:40.641358212Z" level=info msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" Sep 6 00:21:40.667015 env[1286]: time="2025-09-06T00:21:40.666971115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sldlw,Uid:9086b069-766b-4d15-aa22-c0caba04aa75,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808\"" Sep 6 00:21:40.719916 kubelet[2083]: I0906 00:21:40.715666 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" iface="eth0" netns="/var/run/netns/cni-904b9f28-d110-09b7-2a63-04cad98d27a0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" iface="eth0" netns="/var/run/netns/cni-904b9f28-d110-09b7-2a63-04cad98d27a0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" iface="eth0" netns="/var/run/netns/cni-904b9f28-d110-09b7-2a63-04cad98d27a0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.784 [INFO][4112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.853 [INFO][4129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.853 [INFO][4129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.853 [INFO][4129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.877 [WARNING][4129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.877 [INFO][4129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.879 [INFO][4129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:40.886369 env[1286]: 2025-09-06 00:21:40.882 [INFO][4112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:40.887080 env[1286]: time="2025-09-06T00:21:40.887038717Z" level=info msg="TearDown network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" successfully" Sep 6 00:21:40.887175 env[1286]: time="2025-09-06T00:21:40.887157431Z" level=info msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" returns successfully" Sep 6 00:21:40.888019 env[1286]: time="2025-09-06T00:21:40.887989128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-ffpz9,Uid:6b8e70a2-2b3d-4784-830f-5560042446fa,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:21:40.955369 kubelet[2083]: E0906 00:21:40.955003 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:40.961342 kubelet[2083]: E0906 00:21:40.961314 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:40.973374 kubelet[2083]: I0906 00:21:40.972202 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wghq8" podStartSLOduration=42.972173073 podStartE2EDuration="42.972173073s" podCreationTimestamp="2025-09-06 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:40.971483077 +0000 UTC m=+48.620719277" watchObservedRunningTime="2025-09-06 00:21:40.972173073 +0000 UTC m=+48.621409273" Sep 6 00:21:41.132827 kernel: kauditd_printk_skb: 28 callbacks suppressed Sep 6 00:21:41.133001 kernel: audit: type=1325 audit(1757118101.128:311): table=filter:99 family=2 entries=19 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.128000 audit[4171]: NETFILTER_CFG table=filter:99 family=2 entries=19 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.138398 kernel: audit: type=1300 audit(1757118101.128:311): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb505cc50 a2=0 a3=7ffeb505cc3c items=0 ppid=2208 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.138511 kernel: audit: 
type=1327 audit(1757118101.128:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.128000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb505cc50 a2=0 a3=7ffeb505cc3c items=0 ppid=2208 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.155218 systemd-networkd[1050]: cali3e4648bbb28: Gained IPv6LL Sep 6 00:21:41.172000 audit[4171]: NETFILTER_CFG table=nat:100 family=2 entries=45 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.172000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffeb505cc50 a2=0 a3=7ffeb505cc3c items=0 ppid=2208 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.174798 kernel: audit: type=1325 audit(1757118101.172:312): table=nat:100 family=2 entries=45 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.174862 kernel: audit: type=1300 audit(1757118101.172:312): arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffeb505cc50 a2=0 a3=7ffeb505cc3c items=0 ppid=2208 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.174886 kernel: audit: type=1327 audit(1757118101.172:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.244066 kernel: audit: type=1325 audit(1757118101.235:313): table=filter:101 family=2 entries=16 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.244221 kernel: audit: type=1300 audit(1757118101.235:313): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd94802350 a2=0 a3=7ffd9480233c items=0 ppid=2208 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.244251 kernel: audit: type=1327 audit(1757118101.235:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.235000 audit[4182]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.235000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd94802350 a2=0 a3=7ffd9480233c items=0 ppid=2208 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.235000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.250734 kernel: audit: type=1325 audit(1757118101.246:314): table=nat:102 family=2 entries=18 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.246000 audit[4182]: NETFILTER_CFG table=nat:102 family=2 entries=18 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:41.246000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffd94802350 a2=0 a3=0 items=0 ppid=2208 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:41.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:41.299684 systemd-networkd[1050]: calib65704dedb1: Link UP Sep 6 00:21:41.302083 systemd-networkd[1050]: calib65704dedb1: Gained carrier Sep 6 00:21:41.303023 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib65704dedb1: link becomes ready Sep 6 00:21:41.323454 kubelet[2083]: I0906 00:21:41.322790 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4vsk6" podStartSLOduration=43.322767773 podStartE2EDuration="43.322767773s" podCreationTimestamp="2025-09-06 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:41.007128935 +0000 UTC m=+48.656365132" watchObservedRunningTime="2025-09-06 00:21:41.322767773 +0000 UTC m=+48.972003975" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.030 [INFO][4145] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.051 [INFO][4145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0 calico-apiserver-69b54c6ffc- calico-apiserver 6b8e70a2-2b3d-4784-830f-5560042446fa 1014 0 2025-09-06 00:21:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b54c6ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c calico-apiserver-69b54c6ffc-ffpz9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib65704dedb1 [] [] }} ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.051 [INFO][4145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.157 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" HandleID="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.158 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" HandleID="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"calico-apiserver-69b54c6ffc-ffpz9", "timestamp":"2025-09-06 00:21:41.15697185 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.158 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.158 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.158 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.208 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.251 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.258 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.261 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.265 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.265 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.269 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19 Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.275 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.289 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.199/26] block=192.168.86.192/26 handle="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.289 
[INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.199/26] handle="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.289 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:41.334411 env[1286]: 2025-09-06 00:21:41.289 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.199/26] IPv6=[] ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" HandleID="k8s-pod-network.b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.296 [INFO][4145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b8e70a2-2b3d-4784-830f-5560042446fa", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"calico-apiserver-69b54c6ffc-ffpz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib65704dedb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.296 [INFO][4145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.199/32] ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.296 [INFO][4145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib65704dedb1 ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.303 [INFO][4145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.309 [INFO][4145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b8e70a2-2b3d-4784-830f-5560042446fa", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19", Pod:"calico-apiserver-69b54c6ffc-ffpz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib65704dedb1", MAC:"92:89:47:44:7a:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:41.335138 env[1286]: 2025-09-06 00:21:41.324 [INFO][4145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19" Namespace="calico-apiserver" Pod="calico-apiserver-69b54c6ffc-ffpz9" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:41.344002 systemd-networkd[1050]: calic6ff154353e: Gained IPv6LL Sep 6 00:21:41.376053 kubelet[2083]: I0906 00:21:41.372610 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:41.376053 kubelet[2083]: E0906 00:21:41.374921 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.406637 systemd[1]: run-containerd-runc-k8s.io-3c188926570fb5fd00ca3c5f4e6cc57434f18ff0fe1a53f63be9611165781e23-runc.lnSgkh.mount: Deactivated successfully. Sep 6 00:21:41.407804 systemd[1]: run-netns-cni\x2d904b9f28\x2dd110\x2d09b7\x2d2a63\x2d04cad98d27a0.mount: Deactivated successfully. Sep 6 00:21:41.412782 env[1286]: time="2025-09-06T00:21:41.412682464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:41.412958 env[1286]: time="2025-09-06T00:21:41.412752900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:41.412958 env[1286]: time="2025-09-06T00:21:41.412770817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:41.413075 env[1286]: time="2025-09-06T00:21:41.412994214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19 pid=4202 runtime=io.containerd.runc.v2 Sep 6 00:21:41.492329 systemd[1]: run-containerd-runc-k8s.io-b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19-runc.MRtJXv.mount: Deactivated successfully. Sep 6 00:21:41.617819 env[1286]: time="2025-09-06T00:21:41.610251908Z" level=info msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" Sep 6 00:21:41.655037 env[1286]: time="2025-09-06T00:21:41.654995969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b54c6ffc-ffpz9,Uid:6b8e70a2-2b3d-4784-830f-5560042446fa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19\"" Sep 6 00:21:41.940398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219595099.mount: Deactivated successfully. Sep 6 00:21:41.982333 kubelet[2083]: E0906 00:21:41.978273 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.982333 kubelet[2083]: E0906 00:21:41.979234 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.982333 kubelet[2083]: E0906 00:21:41.982218 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.983056 env[1286]: time="2025-09-06T00:21:41.979443071Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:41.986023 env[1286]: time="2025-09-06T00:21:41.985693023Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:41.989543 env[1286]: time="2025-09-06T00:21:41.989490568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:41.992986 env[1286]: time="2025-09-06T00:21:41.992252643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:41.995895 env[1286]: time="2025-09-06T00:21:41.995817785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns 
image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 6 00:21:42.001100 env[1286]: time="2025-09-06T00:21:42.001049127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 00:21:42.008151 env[1286]: time="2025-09-06T00:21:42.008096221Z" level=info msg="CreateContainer within sandbox \"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.832 [INFO][4257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.832 [INFO][4257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" iface="eth0" netns="/var/run/netns/cni-4f0a4b72-e2cf-9e43-3523-63590e35ab82" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.838 [INFO][4257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" iface="eth0" netns="/var/run/netns/cni-4f0a4b72-e2cf-9e43-3523-63590e35ab82" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.840 [INFO][4257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" iface="eth0" netns="/var/run/netns/cni-4f0a4b72-e2cf-9e43-3523-63590e35ab82" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.840 [INFO][4257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.840 [INFO][4257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.993 [INFO][4270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.994 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:41.994 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:42.019 [WARNING][4270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:42.019 [INFO][4270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:42.021 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:21:42.027655 env[1286]: 2025-09-06 00:21:42.024 [INFO][4257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:42.028802 env[1286]: time="2025-09-06T00:21:42.028749881Z" level=info msg="TearDown network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" successfully" Sep 6 00:21:42.028945 env[1286]: time="2025-09-06T00:21:42.028920976Z" level=info msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" returns successfully" Sep 6 00:21:42.029946 env[1286]: time="2025-09-06T00:21:42.029910849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-m85r4,Uid:600d572a-5949-4e09-892e-2ade90d1ec2c,Namespace:calico-system,Attempt:1,}" Sep 6 00:21:42.035649 env[1286]: time="2025-09-06T00:21:42.035583153Z" level=info msg="CreateContainer within sandbox \"2bf2f102d06fe909b8c4378da9671ca51fbab1b192c11e5ce9c82afd9a49a5df\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6af7e08dd836685fc3103851971d3cda5c4d3f6c85540d95396756a97b6818aa\"" Sep 6 00:21:42.038065 env[1286]: time="2025-09-06T00:21:42.036832346Z" level=info msg="StartContainer for \"6af7e08dd836685fc3103851971d3cda5c4d3f6c85540d95396756a97b6818aa\"" Sep 6 00:21:42.056868 systemd-networkd[1050]: cali59749933b4d: Gained IPv6LL Sep 6 00:21:42.241452 env[1286]: time="2025-09-06T00:21:42.239544609Z" level=info msg="StartContainer for \"6af7e08dd836685fc3103851971d3cda5c4d3f6c85540d95396756a97b6818aa\" returns successfully" Sep 6 00:21:42.290000 audit[4352]: NETFILTER_CFG table=filter:103 family=2 entries=15 op=nft_register_rule pid=4352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:42.290000 audit[4352]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd2c452b0 a2=0 a3=7ffcd2c4529c items=0 ppid=2208 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:42.297000 audit[4352]: NETFILTER_CFG table=nat:104 family=2 entries=25 op=nft_register_chain pid=4352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:42.297000 audit[4352]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffcd2c452b0 a2=0 a3=7ffcd2c4529c items=0 ppid=2208 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:42.376904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:42.377056 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid7faa9f2014: link becomes ready Sep 6 00:21:42.376470 systemd-networkd[1050]: calid7faa9f2014: Link UP Sep 6 00:21:42.376621 systemd-networkd[1050]: calid7faa9f2014: Gained carrier Sep 6 00:21:42.412359 systemd[1]: run-netns-cni\x2d4f0a4b72\x2de2cf\x2d9e43\x2d3523\x2d63590e35ab82.mount: Deactivated successfully. 
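
The kernel audit NETFILTER_CFG records above and below carry the invoking command only as a hex-encoded PROCTITLE string, with argv elements joined by NUL bytes. Decoding the value logged here recovers the iptables-restore invocation behind those nft_register_rule events; a small sketch:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied verbatim from the audit records in this log.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv elements are separated by NUL bytes; swap them for spaces.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
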
Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.114 [INFO][4303] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.154 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0 goldmane-7988f88666- calico-system 600d572a-5949-4e09-892e-2ade90d1ec2c 1046 0 2025-09-06 00:21:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-0d6cc4df9c goldmane-7988f88666-m85r4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid7faa9f2014 [] [] }} ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.154 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.233 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" HandleID="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.233 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" HandleID="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0d6cc4df9c", "pod":"goldmane-7988f88666-m85r4", "timestamp":"2025-09-06 00:21:42.233540283 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0d6cc4df9c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.234 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.234 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.234 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0d6cc4df9c' Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.244 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.260 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.269 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.291 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.308 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.192/26 host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.308 [INFO][4337] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.86.192/26 handle="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.316 [INFO][4337] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43 Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.337 [INFO][4337] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.86.192/26 handle="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.368 [INFO][4337] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.86.200/26] block=192.168.86.192/26 handle="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.368 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.200/26] handle="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" host="ci-3510.3.8-n-0d6cc4df9c" Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.368 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:21:42.438234 env[1286]: 2025-09-06 00:21:42.368 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.86.200/26] IPv6=[] ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" HandleID="k8s-pod-network.88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.371 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"600d572a-5949-4e09-892e-2ade90d1ec2c", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"", Pod:"goldmane-7988f88666-m85r4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7faa9f2014", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.371 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.200/32] ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.371 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7faa9f2014 ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.378 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.388 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" 
WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"600d572a-5949-4e09-892e-2ade90d1ec2c", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43", Pod:"goldmane-7988f88666-m85r4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7faa9f2014", MAC:"aa:c7:d1:d4:7e:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:42.439114 env[1286]: 2025-09-06 00:21:42.435 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43" Namespace="calico-system" Pod="goldmane-7988f88666-m85r4" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:42.462789 env[1286]: time="2025-09-06T00:21:42.462463612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:42.462789 env[1286]: time="2025-09-06T00:21:42.462506455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:42.462789 env[1286]: time="2025-09-06T00:21:42.462517002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:42.462789 env[1286]: time="2025-09-06T00:21:42.462663903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43 pid=4372 runtime=io.containerd.runc.v2 Sep 6 00:21:42.533276 systemd[1]: run-containerd-runc-k8s.io-88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43-runc.QhMgLr.mount: Deactivated successfully. 
Sep 6 00:21:42.644475 env[1286]: time="2025-09-06T00:21:42.644429716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-m85r4,Uid:600d572a-5949-4e09-892e-2ade90d1ec2c,Namespace:calico-system,Attempt:1,} returns sandbox id \"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43\"" Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.688000 audit: BPF prog-id=10 op=LOAD Sep 6 00:21:42.688000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9876ee0 a2=98 a3=1fffffffffffffff items=0 ppid=4323 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.688000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:21:42.689000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.689000 audit: BPF prog-id=11 op=LOAD Sep 6 00:21:42.689000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9876dc0 a2=94 a3=3 items=0 ppid=4323 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:21:42.690000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit[4415]: AVC avc: denied { bpf } for pid=4415 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.690000 audit: BPF prog-id=12 op=LOAD Sep 6 00:21:42.690000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9876e00 a2=94 a3=7ffcc9876fe0 items=0 ppid=4323 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.690000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:21:42.691000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:21:42.691000 audit[4415]: AVC avc: denied { perfmon } for pid=4415 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.691000 audit[4415]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffcc9876ed0 a2=50 a3=a000000085 items=0 ppid=4323 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.691000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.693000 audit: BPF prog-id=13 op=LOAD Sep 6 00:21:42.693000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc334704a0 a2=98 a3=3 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.693000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit: BPF prog-id=14 op=LOAD Sep 6 00:21:42.694000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc33470290 a2=94 a3=54428f items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.694000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.694000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.694000 audit: BPF prog-id=15 op=LOAD Sep 6 00:21:42.694000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc334702c0 a2=94 a3=2 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.694000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.694000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit: BPF prog-id=16 op=LOAD Sep 6 00:21:42.837000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc33470180 a2=94 a3=1 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.837000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.837000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:21:42.837000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.837000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc33470250 a2=50 a3=7ffc33470330 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.837000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.852000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.852000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc33470190 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.852000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.852000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc334701c0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.852000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.852000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc334700d0 a2=28 a3=0 items=0 ppid=4323 
pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.852000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.852000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc334701e0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc334701c0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc334701b0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc334701e0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc334701c0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc334701e0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc334701b0 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc33470220 a2=28 a3=0 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc3346ffd0 a2=50 a3=1 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit: BPF prog-id=17 op=LOAD Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc3346ffd0 a2=94 a3=5 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc33470080 a2=50 a3=1 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc334701a0 a2=4 a3=38 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { 
perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { confidentiality } for pid=4416 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc334701f0 a2=94 a3=6 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.853000 audit[4416]: AVC avc: denied { confidentiality } for pid=4416 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:21:42.853000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc3346f9a0 a2=94 a3=88 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { bpf } for pid=4416 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: AVC avc: denied { perfmon } for pid=4416 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.854000 audit[4416]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc3346f9a0 a2=94 a3=88 items=0 ppid=4323 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 
audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit: BPF prog-id=18 op=LOAD Sep 6 00:21:42.876000 audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed942d9e0 a2=98 a3=1999999999999999 items=0 ppid=4323 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.876000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:21:42.876000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: 
denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit: BPF prog-id=19 op=LOAD Sep 6 00:21:42.876000 audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed942d8c0 a2=94 a3=ffff items=0 ppid=4323 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.876000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:21:42.876000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { perfmon } for pid=4419 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit[4419]: AVC avc: denied { bpf } for pid=4419 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:42.876000 audit: BPF prog-id=20 op=LOAD Sep 6 00:21:42.876000 
audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed942d900 a2=94 a3=7ffed942dae0 items=0 ppid=4323 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:42.876000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:21:42.877000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:21:42.879419 systemd-networkd[1050]: calib65704dedb1: Gained IPv6LL Sep 6 00:21:42.967434 systemd-networkd[1050]: vxlan.calico: Link UP Sep 6 00:21:42.967442 systemd-networkd[1050]: vxlan.calico: Gained carrier Sep 6 00:21:42.984807 kubelet[2083]: E0906 00:21:42.984614 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.986990 kubelet[2083]: E0906 00:21:42.985295 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.999111 kubelet[2083]: I0906 00:21:42.997871 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bc5f44fd6-c7vdw" podStartSLOduration=2.826436934 podStartE2EDuration="8.997850195s" podCreationTimestamp="2025-09-06 00:21:34 +0000 UTC" firstStartedPulling="2025-09-06 00:21:35.826827189 +0000 UTC m=+43.476063380" lastFinishedPulling="2025-09-06 00:21:41.998240462 +0000 UTC m=+49.647476641" observedRunningTime="2025-09-06 00:21:42.99769518 +0000 UTC m=+50.646931382" watchObservedRunningTime="2025-09-06 00:21:42.997850195 +0000 UTC m=+50.647086395" Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied 
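
The audit PROCTITLE fields above are the audited command lines, hex-encoded with NUL-separated arguments. The value logged for pid 4419 decodes to the bpftool invocation Calico uses to create its failsafe-ports map: bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash key 4 value 1 entries 65535 name calico_failsafe_ports_ (the last argument is cut off by the kernel's proctitle length limit). The later PROCTITLE records for pids 4445 and 4447 decode the same way, to bpftool prog load and bpftool prog show commands. A minimal decoding sketch, assuming Python 3 is available on the host; the helper below is illustrative only and not part of Calico or bpftool:

    # decode_proctitle.py -- print an audit PROCTITLE value as a readable command line.
    # The field is hex-encoded argv with NUL bytes separating the arguments.
    import sys

    hex_value = sys.argv[1]                      # paste the proctitle= value here
    argv = bytes.fromhex(hex_value).split(b"\x00")
    print(" ".join(arg.decode(errors="replace") for arg in argv))

    # Example: running it on the pid 4419 value above prints
    #   bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1 type hash
    #   key 4 value 1 entries 65535 name calico_failsafe_ports_
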
{ bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.028000 audit: BPF prog-id=21 op=LOAD Sep 6 00:21:43.028000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd2a49a0 a2=98 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.028000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.028000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit: BPF prog-id=22 op=LOAD Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd2a47b0 a2=94 a3=54428f items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit: BPF prog-id=23 op=LOAD Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddd2a47e0 a2=94 a3=2 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a46b0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffddd2a46e0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffddd2a45f0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a4700 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a46e0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a46d0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a4700 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffddd2a46e0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffddd2a4700 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffddd2a46d0 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffddd2a4740 a2=28 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.029000 audit: BPF prog-id=24 op=LOAD Sep 6 00:21:43.029000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffddd2a45b0 a2=94 a3=0 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.029000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.029000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffddd2a45a0 a2=50 a3=2800 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.030000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffddd2a45a0 a2=50 a3=2800 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.030000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit: BPF prog-id=25 op=LOAD Sep 6 00:21:43.030000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffddd2a3dc0 a2=94 a3=2 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.030000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.030000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { perfmon } for pid=4445 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit[4445]: AVC avc: denied { bpf } for pid=4445 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.030000 audit: BPF prog-id=26 op=LOAD Sep 6 00:21:43.030000 audit[4445]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffddd2a3ec0 a2=94 a3=30 items=0 ppid=4323 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.030000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit: BPF prog-id=27 op=LOAD Sep 6 00:21:43.036000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2d291d00 a2=98 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.036000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.036000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
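
The SYSCALL records accompanying each denial can be read directly: arch=c000003e is AUDIT_ARCH_X86_64, syscall=321 is bpf(2) on x86-64, and the capability numbers in the AVC lines are CAP_PERFMON (38) and CAP_BPF (39), which the kernel checks for BPF and perf-related operations. A small lookup sketch covering only the constants that appear in this log (values hard-coded here for illustration):

    # audit_fields.py -- name the numeric audit fields seen in the records above.
    AUDIT_ARCH = {0xC000003E: "AUDIT_ARCH_X86_64"}
    SYSCALLS_X86_64 = {321: "bpf"}
    CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

    def describe(arch: str, syscall: int, capability: int) -> str:
        return "%s %s() %s" % (
            AUDIT_ARCH.get(int(arch, 16), arch),
            SYSCALLS_X86_64.get(syscall, str(syscall)),
            CAPABILITIES.get(capability, str(capability)),
        )

    print(describe("c000003e", 321, 39))  # AUDIT_ARCH_X86_64 bpf() CAP_BPF
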
tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit: BPF prog-id=28 op=LOAD Sep 6 00:21:43.036000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2d291af0 a2=94 a3=54428f items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.036000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.036000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.036000 audit: BPF prog-id=29 op=LOAD Sep 6 00:21:43.036000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2d291b20 a2=94 a3=2 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.036000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.036000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit: BPF prog-id=30 op=LOAD Sep 6 00:21:43.209000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe2d2919e0 a2=94 a3=1 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 
00:21:43.209000 audit: BPF prog-id=30 op=UNLOAD Sep 6 00:21:43.209000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.209000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe2d291ab0 a2=50 a3=7ffe2d291b90 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.209000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d2919f0 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2d291a20 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2d291930 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d291a40 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d291a20 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d291a10 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d291a40 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2d291a20 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2d291a40 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2d291a10 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.221000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.221000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe2d291a80 a2=28 a3=0 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.221000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe2d291830 a2=50 a3=1 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit: BPF prog-id=31 op=LOAD Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe2d291830 a2=94 a3=5 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit: BPF prog-id=31 op=UNLOAD Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe2d2918e0 a2=50 a3=1 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe2d291a00 a2=4 a3=38 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { confidentiality } for pid=4447 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2d291a50 a2=94 a3=6 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { confidentiality } for pid=4447 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2d291200 a2=94 a3=88 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
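
The { confidentiality } denials above are lockdown-class checks (lockdown_reason="use of bpf to read kernel RAM"): the bpftool probe attempted a BPF operation that can read kernel memory, the check was refused with permissive=0, and the corresponding bpf(2) calls returned -22. One way to inspect the host's global lockdown setting, assuming securityfs is mounted at the usual /sys/kernel/security; this is a read-only check shown only as a sketch:

    # lockdown_mode.py -- print the active kernel lockdown mode; the active entry
    # is the bracketed word, e.g. "[none] integrity confidentiality".
    from pathlib import Path

    modes = Path("/sys/kernel/security/lockdown").read_text().split()
    active = next(word.strip("[]") for word in modes if word.startswith("["))
    print("kernel lockdown mode:", active)
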
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { perfmon } for pid=4447 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.222000 audit[4447]: AVC avc: denied { confidentiality } for pid=4447 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:21:43.222000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe2d291200 a2=94 a3=88 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.222000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.223000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.223000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2d292c30 a2=10 a3=f8f00800 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.223000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.223000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2d292ad0 a2=10 a3=3 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.223000 audit[4447]: AVC avc: denied { bpf 
} for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.223000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2d292a70 a2=10 a3=3 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.223000 audit[4447]: AVC avc: denied { bpf } for pid=4447 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:21:43.223000 audit[4447]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe2d292a70 a2=10 a3=7 items=0 ppid=4323 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:21:43.231000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:21:43.319000 audit[4472]: NETFILTER_CFG table=filter:105 family=2 entries=13 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:43.319000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffefc23be10 a2=0 a3=7ffefc23bdfc items=0 ppid=2208 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:43.325000 audit[4472]: NETFILTER_CFG table=nat:106 family=2 entries=27 op=nft_register_chain pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:43.325000 audit[4472]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffefc23be10 a2=0 a3=7ffefc23bdfc items=0 ppid=2208 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:43.362000 audit[4483]: NETFILTER_CFG table=mangle:107 family=2 entries=16 op=nft_register_chain pid=4483 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:21:43.362000 audit[4483]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeb1089820 a2=0 a3=7ffeb108980c items=0 ppid=4323 pid=4483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.362000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:21:43.385000 audit[4484]: NETFILTER_CFG table=nat:108 family=2 entries=15 op=nft_register_chain pid=4484 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:21:43.385000 audit[4484]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcb3fc0b10 a2=0 a3=7ffcb3fc0afc items=0 ppid=4323 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.385000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:21:43.397000 audit[4482]: NETFILTER_CFG table=raw:109 family=2 entries=21 op=nft_register_chain pid=4482 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:21:43.397000 audit[4482]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe8b886670 a2=0 a3=7ffe8b88665c items=0 ppid=4323 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:21:43.399000 audit[4489]: NETFILTER_CFG table=filter:110 family=2 entries=321 op=nft_register_chain pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:21:43.399000 audit[4489]: SYSCALL arch=c000003e syscall=46 success=yes exit=190616 a0=3 a1=7ffe0a81c250 a2=0 a3=560a906c0000 items=0 ppid=4323 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:43.399000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:21:44.360770 systemd-networkd[1050]: calid7faa9f2014: Gained IPv6LL Sep 6 00:21:44.733933 systemd-networkd[1050]: vxlan.calico: Gained IPv6LL Sep 6 00:21:45.558863 env[1286]: time="2025-09-06T00:21:45.558813643Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:45.561375 env[1286]: time="2025-09-06T00:21:45.561335648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:45.563749 env[1286]: time="2025-09-06T00:21:45.563680503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:45.565836 env[1286]: time="2025-09-06T00:21:45.565783960Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:45.567906 env[1286]: time="2025-09-06T00:21:45.567167763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 6 00:21:45.569233 env[1286]: time="2025-09-06T00:21:45.569204707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:21:45.602380 env[1286]: time="2025-09-06T00:21:45.602320624Z" level=info msg="CreateContainer within sandbox \"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 00:21:45.617981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837112002.mount: Deactivated successfully. Sep 6 00:21:45.624812 env[1286]: time="2025-09-06T00:21:45.624758811Z" level=info msg="CreateContainer within sandbox \"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"65c99e53f0a689e5588b1a744420e40d3aeea558b3cee7c4db0821d304928088\"" Sep 6 00:21:45.626151 env[1286]: time="2025-09-06T00:21:45.626113749Z" level=info msg="StartContainer for \"65c99e53f0a689e5588b1a744420e40d3aeea558b3cee7c4db0821d304928088\"" Sep 6 00:21:45.721770 env[1286]: time="2025-09-06T00:21:45.721547733Z" level=info msg="StartContainer for \"65c99e53f0a689e5588b1a744420e40d3aeea558b3cee7c4db0821d304928088\" returns successfully" Sep 6 00:21:46.018999 kubelet[2083]: I0906 00:21:46.018911 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bb788675f-mrpct" podStartSLOduration=26.548141069 podStartE2EDuration="34.018876917s" podCreationTimestamp="2025-09-06 00:21:12 +0000 UTC" firstStartedPulling="2025-09-06 00:21:38.098285667 +0000 UTC m=+45.747521847" lastFinishedPulling="2025-09-06 00:21:45.569021516 +0000 UTC m=+53.218257695" observedRunningTime="2025-09-06 00:21:46.014307477 +0000 UTC m=+53.663543678" watchObservedRunningTime="2025-09-06 00:21:46.018876917 +0000 UTC m=+53.668113117" Sep 6 00:21:47.188232 kernel: kauditd_printk_skb: 542 callbacks suppressed Sep 6 00:21:47.192833 kernel: audit: type=1130 audit(1757118107.180:421): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.227.108.127:22-147.75.109.163:56910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:47.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.227.108.127:22-147.75.109.163:56910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:47.181126 systemd[1]: Started sshd@8-64.227.108.127:22-147.75.109.163:56910.service. 
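A note for reading the bpftool audit records above: arch=c000003e identifies x86-64, syscall=321 is bpf(2), capabilities 39 and 38 are CAP_BPF and CAP_PERFMON, exit=-22 is -EINVAL, and each PROCTITLE field carries the command line as NUL-separated hex. A minimal Python sketch for decoding it (the helper name is illustrative; the hex value is copied from the records above, and the iptables and sshd PROCTITLE records in this log decode the same way):

def decode_proctitle(hex_string):
    # Audit PROCTITLE records store argv as hex bytes with NUL separators.
    raw = bytes.fromhex(hex_string)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00")]

proctitle = (
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
    "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
    "6566696C7465725F76315F63616C69636F5F746D705F41"
)
print(decode_proctitle(proctitle))
# ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#  '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']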
Sep 6 00:21:47.318000 audit[4556]: USER_ACCT pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.333741 kernel: audit: type=1101 audit(1757118107.318:422): pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.333867 sshd[4556]: Accepted publickey for core from 147.75.109.163 port 56910 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:47.342163 kernel: audit: type=1103 audit(1757118107.334:423): pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.342340 kernel: audit: type=1006 audit(1757118107.338:424): pid=4556 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Sep 6 00:21:47.334000 audit[4556]: CRED_ACQ pid=4556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.348363 kernel: audit: type=1300 audit(1757118107.338:424): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe593efa40 a2=3 a3=0 items=0 ppid=1 pid=4556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.349989 kernel: audit: type=1327 audit(1757118107.338:424): proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:47.338000 audit[4556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe593efa40 a2=3 a3=0 items=0 ppid=1 pid=4556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:47.338000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:47.353369 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:47.377803 systemd-logind[1277]: New session 8 of user core. Sep 6 00:21:47.378765 systemd[1]: Started session-8.scope. 
Sep 6 00:21:47.391000 audit[4556]: USER_START pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.398778 kernel: audit: type=1105 audit(1757118107.391:425): pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.399000 audit[4570]: CRED_ACQ pid=4570 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:47.404807 kernel: audit: type=1103 audit(1757118107.399:426): pid=4570 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:48.282981 sshd[4556]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:48.282000 audit[4556]: USER_END pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:48.288737 kernel: audit: type=1106 audit(1757118108.282:427): pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:48.289142 kernel: audit: type=1104 audit(1757118108.286:428): pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:48.286000 audit[4556]: CRED_DISP pid=4556 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:48.293877 systemd[1]: sshd@8-64.227.108.127:22-147.75.109.163:56910.service: Deactivated successfully. Sep 6 00:21:48.294791 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:21:48.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-64.227.108.127:22-147.75.109.163:56910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:48.295896 systemd-logind[1277]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:21:48.296732 systemd-logind[1277]: Removed session 8. 
Sep 6 00:21:49.233900 env[1286]: time="2025-09-06T00:21:49.233845164Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:49.237654 env[1286]: time="2025-09-06T00:21:49.237604510Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:49.239480 env[1286]: time="2025-09-06T00:21:49.239437829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:49.241083 env[1286]: time="2025-09-06T00:21:49.241051830Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:49.241905 env[1286]: time="2025-09-06T00:21:49.241865978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 00:21:49.307313 env[1286]: time="2025-09-06T00:21:49.307013866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 00:21:49.417831 env[1286]: time="2025-09-06T00:21:49.417786241Z" level=info msg="CreateContainer within sandbox \"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:21:49.433761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075932917.mount: Deactivated successfully. 
Sep 6 00:21:49.438758 env[1286]: time="2025-09-06T00:21:49.438683042Z" level=info msg="CreateContainer within sandbox \"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c53dcccd1fa0d103ef5a691b685e29d7bb005779c4a8e8461505eeb1949b8a4\"" Sep 6 00:21:49.442079 env[1286]: time="2025-09-06T00:21:49.441359761Z" level=info msg="StartContainer for \"9c53dcccd1fa0d103ef5a691b685e29d7bb005779c4a8e8461505eeb1949b8a4\"" Sep 6 00:21:49.587468 env[1286]: time="2025-09-06T00:21:49.587340273Z" level=info msg="StartContainer for \"9c53dcccd1fa0d103ef5a691b685e29d7bb005779c4a8e8461505eeb1949b8a4\" returns successfully" Sep 6 00:21:50.307000 audit[4626]: NETFILTER_CFG table=filter:111 family=2 entries=12 op=nft_register_rule pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:50.307000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff5b995080 a2=0 a3=7fff5b99506c items=0 ppid=2208 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:50.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:50.311000 audit[4626]: NETFILTER_CFG table=nat:112 family=2 entries=22 op=nft_register_rule pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:50.311000 audit[4626]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff5b995080 a2=0 a3=7fff5b99506c items=0 ppid=2208 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:50.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:50.428802 systemd[1]: run-containerd-runc-k8s.io-9c53dcccd1fa0d103ef5a691b685e29d7bb005779c4a8e8461505eeb1949b8a4-runc.sMfzM1.mount: Deactivated successfully. 
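For the NETFILTER_CFG/SYSCALL pairs above: syscall 46 on x86-64 is sendmsg(2) (the rule batch is sent to the kernel over netlink), family=2 is AF_INET, and exe=/usr/sbin/xtables-nft-multi is the iptables-nft multicall binary behind the truncated comm values; the PROCTITLE hex here decodes to "iptables-restore -w 5 -W 100000 --noflush --counters" and "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000". A small sketch for splitting such a record into key=value fields (the helper and the abridged sample line are illustrative):

import re

def parse_audit_fields(record):
    # Split an audit record body into key=value pairs; values may be quoted.
    fields = {}
    for key, quoted, bare in re.findall(r'(\w[\w-]*)=(?:"([^"]*)"|(\S+))', record):
        fields[key] = quoted if quoted else bare
    return fields

sample = ('table=nat:106 family=2 entries=27 op=nft_register_chain '
          'pid=4472 comm="iptables-restor"')
print(parse_audit_fields(sample))
# {'table': 'nat:106', 'family': '2', 'entries': '27',
#  'op': 'nft_register_chain', 'pid': '4472', 'comm': 'iptables-restor'}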
Sep 6 00:21:50.945827 env[1286]: time="2025-09-06T00:21:50.945773882Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.947043 env[1286]: time="2025-09-06T00:21:50.947005559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.948700 env[1286]: time="2025-09-06T00:21:50.948663136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.949750 env[1286]: time="2025-09-06T00:21:50.949691502Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:50.951088 env[1286]: time="2025-09-06T00:21:50.950976561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 6 00:21:50.962245 env[1286]: time="2025-09-06T00:21:50.962117912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:21:50.980871 env[1286]: time="2025-09-06T00:21:50.979002578Z" level=info msg="CreateContainer within sandbox \"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 00:21:50.998735 env[1286]: time="2025-09-06T00:21:50.997113363Z" level=info msg="CreateContainer within sandbox \"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d3d29a24cd3012362b6423338993779bb28e78889f79d9abd605bfaf74eb179c\"" Sep 6 00:21:51.006077 env[1286]: time="2025-09-06T00:21:51.000293320Z" level=info msg="StartContainer for \"d3d29a24cd3012362b6423338993779bb28e78889f79d9abd605bfaf74eb179c\"" Sep 6 00:21:51.014962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109173541.mount: Deactivated successfully. 
Sep 6 00:21:51.107188 env[1286]: time="2025-09-06T00:21:51.107132038Z" level=info msg="StartContainer for \"d3d29a24cd3012362b6423338993779bb28e78889f79d9abd605bfaf74eb179c\" returns successfully" Sep 6 00:21:51.228642 kubelet[2083]: I0906 00:21:51.228508 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:51.316901 env[1286]: time="2025-09-06T00:21:51.316832873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:51.319213 env[1286]: time="2025-09-06T00:21:51.319173581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:51.320577 env[1286]: time="2025-09-06T00:21:51.320534364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:51.322443 env[1286]: time="2025-09-06T00:21:51.322394641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:51.323332 env[1286]: time="2025-09-06T00:21:51.323264288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 00:21:51.324836 env[1286]: time="2025-09-06T00:21:51.324777710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 00:21:51.328027 env[1286]: time="2025-09-06T00:21:51.327990292Z" level=info msg="CreateContainer within sandbox \"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:21:51.344911 env[1286]: time="2025-09-06T00:21:51.344864034Z" level=info msg="CreateContainer within sandbox \"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35725d80ee4293d49b40f2a80d89077724a0b1b70c86b4a9e40e038a88983f7b\"" Sep 6 00:21:51.347273 env[1286]: time="2025-09-06T00:21:51.345923640Z" level=info msg="StartContainer for \"35725d80ee4293d49b40f2a80d89077724a0b1b70c86b4a9e40e038a88983f7b\"" Sep 6 00:21:51.517959 env[1286]: time="2025-09-06T00:21:51.517181023Z" level=info msg="StartContainer for \"35725d80ee4293d49b40f2a80d89077724a0b1b70c86b4a9e40e038a88983f7b\" returns successfully" Sep 6 00:21:52.236976 kubelet[2083]: I0906 00:21:52.233543 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b54c6ffc-xjcwv" podStartSLOduration=33.916648593 podStartE2EDuration="43.230723321s" podCreationTimestamp="2025-09-06 00:21:09 +0000 UTC" firstStartedPulling="2025-09-06 00:21:39.984926545 +0000 UTC m=+47.634162724" lastFinishedPulling="2025-09-06 00:21:49.29900126 +0000 UTC m=+56.948237452" observedRunningTime="2025-09-06 00:21:50.254026836 +0000 UTC m=+57.903263034" watchObservedRunningTime="2025-09-06 00:21:52.230723321 +0000 UTC m=+59.879959511" Sep 6 00:21:52.316000 audit[4699]: NETFILTER_CFG table=filter:113 family=2 entries=12 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Sep 6 00:21:52.318134 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:21:52.318423 kernel: audit: type=1325 audit(1757118112.316:432): table=filter:113 family=2 entries=12 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:52.316000 audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff57805500 a2=0 a3=7fff578054ec items=0 ppid=2208 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:52.323744 kernel: audit: type=1300 audit(1757118112.316:432): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff57805500 a2=0 a3=7fff578054ec items=0 ppid=2208 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:52.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:52.325960 kernel: audit: type=1327 audit(1757118112.316:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:52.326000 audit[4699]: NETFILTER_CFG table=nat:114 family=2 entries=22 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:52.326000 audit[4699]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff57805500 a2=0 a3=7fff578054ec items=0 ppid=2208 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:52.333918 kernel: audit: type=1325 audit(1757118112.326:433): table=nat:114 family=2 entries=22 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:52.334021 kernel: audit: type=1300 audit(1757118112.326:433): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff57805500 a2=0 a3=7fff578054ec items=0 ppid=2208 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:52.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:52.335732 kernel: audit: type=1327 audit(1757118112.326:433): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:52.560964 env[1286]: time="2025-09-06T00:21:52.560676358Z" level=info msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" Sep 6 00:21:53.244773 kubelet[2083]: I0906 00:21:53.244735 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:21:53.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.227.108.127:22-147.75.109.163:36590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:53.291617 systemd[1]: Started sshd@9-64.227.108.127:22-147.75.109.163:36590.service. 
Sep 6 00:21:53.295765 kernel: audit: type=1130 audit(1757118113.291:434): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.227.108.127:22-147.75.109.163:36590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:53.493845 sshd[4722]: Accepted publickey for core from 147.75.109.163 port 36590 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:53.493000 audit[4722]: USER_ACCT pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.501304 kernel: audit: type=1101 audit(1757118113.493:435): pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.501000 audit[4722]: CRED_ACQ pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.507365 kernel: audit: type=1103 audit(1757118113.501:436): pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.507512 kernel: audit: type=1006 audit(1757118113.501:437): pid=4722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 6 00:21:53.501000 audit[4722]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeffad08b0 a2=3 a3=0 items=0 ppid=1 pid=4722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:53.501000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:53.507925 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:53.544968 systemd-logind[1277]: New session 9 of user core. Sep 6 00:21:53.545569 systemd[1]: Started session-9.scope. Sep 6 00:21:53.574000 audit[4722]: USER_START pid=4722 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.584000 audit[4734]: CRED_ACQ pid=4734 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.055 [WARNING][4708] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd", Pod:"calico-apiserver-69b54c6ffc-xjcwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e4648bbb28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.057 [INFO][4708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.057 [INFO][4708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" iface="eth0" netns="" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.057 [INFO][4708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.057 [INFO][4708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.587 [INFO][4717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.588 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.588 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.617 [WARNING][4717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.617 [INFO][4717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.620 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:53.635153 env[1286]: 2025-09-06 00:21:53.623 [INFO][4708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:53.645532 env[1286]: time="2025-09-06T00:21:53.642899067Z" level=info msg="TearDown network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" successfully" Sep 6 00:21:53.645532 env[1286]: time="2025-09-06T00:21:53.642945709Z" level=info msg="StopPodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" returns successfully" Sep 6 00:21:54.702050 sshd[4722]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:54.704000 audit[4722]: USER_END pid=4722 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:54.704000 audit[4722]: CRED_DISP pid=4722 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:54.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-64.227.108.127:22-147.75.109.163:36590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:54.708225 systemd[1]: sshd@9-64.227.108.127:22-147.75.109.163:36590.service: Deactivated successfully. Sep 6 00:21:54.709621 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:21:54.709874 systemd-logind[1277]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:21:54.712605 systemd-logind[1277]: Removed session 9. Sep 6 00:21:54.888508 env[1286]: time="2025-09-06T00:21:54.887836699Z" level=info msg="RemovePodSandbox for \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" Sep 6 00:21:54.888508 env[1286]: time="2025-09-06T00:21:54.887899228Z" level=info msg="Forcibly stopping sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\"" Sep 6 00:21:55.270584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690623438.mount: Deactivated successfully. Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.169 [WARNING][4757] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b20a7ea0-580e-468e-bb6b-4ff362ba7f7c", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"4b59a89b1e65ef806e1dc2e2c4fe2a40fd7256100cfc29e9413dc85566296dbd", Pod:"calico-apiserver-69b54c6ffc-xjcwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e4648bbb28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.171 [INFO][4757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.172 [INFO][4757] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" iface="eth0" netns="" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.172 [INFO][4757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.172 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.372 [INFO][4772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.375 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.376 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.412 [WARNING][4772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.415 [INFO][4772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" HandleID="k8s-pod-network.60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--xjcwv-eth0" Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.417 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:55.422494 env[1286]: 2025-09-06 00:21:55.420 [INFO][4757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0" Sep 6 00:21:55.424118 env[1286]: time="2025-09-06T00:21:55.422544004Z" level=info msg="TearDown network for sandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" successfully" Sep 6 00:21:55.426542 env[1286]: time="2025-09-06T00:21:55.426488491Z" level=info msg="RemovePodSandbox \"60bd226ac192613455d039aa50afa6b9a6e32aaeeab5795bceb88502e8f6aff0\" returns successfully" Sep 6 00:21:55.433315 env[1286]: time="2025-09-06T00:21:55.433229305Z" level=info msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.510 [WARNING][4800] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"600d572a-5949-4e09-892e-2ade90d1ec2c", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43", Pod:"goldmane-7988f88666-m85r4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7faa9f2014", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.510 [INFO][4800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.510 
[INFO][4800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" iface="eth0" netns="" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.510 [INFO][4800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.510 [INFO][4800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.544 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.545 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.545 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.554 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.554 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.556 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:55.564342 env[1286]: 2025-09-06 00:21:55.560 [INFO][4800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.565756 env[1286]: time="2025-09-06T00:21:55.564322612Z" level=info msg="TearDown network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" successfully" Sep 6 00:21:55.565832 env[1286]: time="2025-09-06T00:21:55.565745543Z" level=info msg="StopPodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" returns successfully" Sep 6 00:21:55.566407 env[1286]: time="2025-09-06T00:21:55.566365368Z" level=info msg="RemovePodSandbox for \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" Sep 6 00:21:55.566482 env[1286]: time="2025-09-06T00:21:55.566412594Z" level=info msg="Forcibly stopping sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\"" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.624 [WARNING][4824] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"600d572a-5949-4e09-892e-2ade90d1ec2c", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43", Pod:"goldmane-7988f88666-m85r4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7faa9f2014", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.627 [INFO][4824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.628 [INFO][4824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" iface="eth0" netns="" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.628 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.628 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.668 [INFO][4831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.668 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.668 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.677 [WARNING][4831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.677 [INFO][4831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" HandleID="k8s-pod-network.5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-goldmane--7988f88666--m85r4-eth0" Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.679 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:55.684942 env[1286]: 2025-09-06 00:21:55.682 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b" Sep 6 00:21:55.686168 env[1286]: time="2025-09-06T00:21:55.684962239Z" level=info msg="TearDown network for sandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" successfully" Sep 6 00:21:55.688061 env[1286]: time="2025-09-06T00:21:55.688005753Z" level=info msg="RemovePodSandbox \"5fb7a9513a207c8ef9706fce0e1b974d86ec0c70bcb2fb1233b8c9755ba4468b\" returns successfully" Sep 6 00:21:55.688860 env[1286]: time="2025-09-06T00:21:55.688825138Z" level=info msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.766 [WARNING][4847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9086b069-766b-4d15-aa22-c0caba04aa75", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808", Pod:"csi-node-driver-sldlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59749933b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.767 [INFO][4847] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.767 [INFO][4847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" iface="eth0" netns="" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.767 [INFO][4847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.767 [INFO][4847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.806 [INFO][4854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.806 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.806 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.814 [WARNING][4854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.814 [INFO][4854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.817 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:55.823894 env[1286]: 2025-09-06 00:21:55.821 [INFO][4847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:55.825451 env[1286]: time="2025-09-06T00:21:55.823868208Z" level=info msg="TearDown network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" successfully" Sep 6 00:21:55.825451 env[1286]: time="2025-09-06T00:21:55.824565559Z" level=info msg="StopPodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" returns successfully" Sep 6 00:21:55.825451 env[1286]: time="2025-09-06T00:21:55.825367420Z" level=info msg="RemovePodSandbox for \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" Sep 6 00:21:55.825451 env[1286]: time="2025-09-06T00:21:55.825400167Z" level=info msg="Forcibly stopping sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\"" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.917 [WARNING][4868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9086b069-766b-4d15-aa22-c0caba04aa75", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808", Pod:"csi-node-driver-sldlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali59749933b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.918 [INFO][4868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.918 [INFO][4868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" iface="eth0" netns="" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.918 [INFO][4868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.918 [INFO][4868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.987 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.988 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.988 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.995 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.995 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" HandleID="k8s-pod-network.06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-csi--node--driver--sldlw-eth0" Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.997 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.001790 env[1286]: 2025-09-06 00:21:55.999 [INFO][4868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff" Sep 6 00:21:56.002804 env[1286]: time="2025-09-06T00:21:56.001824533Z" level=info msg="TearDown network for sandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" successfully" Sep 6 00:21:56.004965 env[1286]: time="2025-09-06T00:21:56.004905823Z" level=info msg="RemovePodSandbox \"06d7aeb16db4113803a9f05bb8ac04e618bce4e7e2bb91ec654c3f4b831224ff\" returns successfully" Sep 6 00:21:56.005678 env[1286]: time="2025-09-06T00:21:56.005646364Z" level=info msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.094 [WARNING][4890] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.101 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.101 [INFO][4890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" iface="eth0" netns="" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.101 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.101 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.154 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.155 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.155 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.161 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.162 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.163 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.169265 env[1286]: 2025-09-06 00:21:56.166 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.171671 env[1286]: time="2025-09-06T00:21:56.169646563Z" level=info msg="TearDown network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" successfully" Sep 6 00:21:56.171671 env[1286]: time="2025-09-06T00:21:56.169688279Z" level=info msg="StopPodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" returns successfully" Sep 6 00:21:56.171671 env[1286]: time="2025-09-06T00:21:56.170515348Z" level=info msg="RemovePodSandbox for \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" Sep 6 00:21:56.171671 env[1286]: time="2025-09-06T00:21:56.170548597Z" level=info msg="Forcibly stopping sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\"" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.237 [WARNING][4912] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" WorkloadEndpoint="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.237 [INFO][4912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.237 [INFO][4912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" iface="eth0" netns="" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.237 [INFO][4912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.237 [INFO][4912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.276 [INFO][4919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.276 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.276 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.285 [WARNING][4919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.285 [INFO][4919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" HandleID="k8s-pod-network.b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-whisker--776955cbdc--vpgd2-eth0" Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.287 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.293209 env[1286]: 2025-09-06 00:21:56.290 [INFO][4912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960" Sep 6 00:21:56.294086 env[1286]: time="2025-09-06T00:21:56.294044927Z" level=info msg="TearDown network for sandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" successfully" Sep 6 00:21:56.297265 env[1286]: time="2025-09-06T00:21:56.297221519Z" level=info msg="RemovePodSandbox \"b6ab7e825eede996f3e056aaf11e4d507085d83489c49b0b8d457c3ce3784960\" returns successfully" Sep 6 00:21:56.298324 env[1286]: time="2025-09-06T00:21:56.298293784Z" level=info msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.386 [WARNING][4934] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68db3c5f-2540-494b-b7e8-8a05c1287332", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc", Pod:"coredns-7c65d6cfc9-wghq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali294bfc57275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.387 [INFO][4934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.387 [INFO][4934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" iface="eth0" netns="" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.387 [INFO][4934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.387 [INFO][4934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.438 [INFO][4942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.438 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.438 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.446 [WARNING][4942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.446 [INFO][4942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.448 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.454092 env[1286]: 2025-09-06 00:21:56.451 [INFO][4934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.454092 env[1286]: time="2025-09-06T00:21:56.454027747Z" level=info msg="TearDown network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" successfully" Sep 6 00:21:56.454092 env[1286]: time="2025-09-06T00:21:56.454058745Z" level=info msg="StopPodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" returns successfully" Sep 6 00:21:56.456160 env[1286]: time="2025-09-06T00:21:56.456124819Z" level=info msg="RemovePodSandbox for \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" Sep 6 00:21:56.456343 env[1286]: time="2025-09-06T00:21:56.456286957Z" level=info msg="Forcibly stopping sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\"" Sep 6 00:21:56.459945 env[1286]: time="2025-09-06T00:21:56.459885089Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:56.462530 env[1286]: time="2025-09-06T00:21:56.462473604Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:56.464995 env[1286]: time="2025-09-06T00:21:56.464951898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:56.466222 env[1286]: time="2025-09-06T00:21:56.466180157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 6 00:21:56.466914 env[1286]: time="2025-09-06T00:21:56.466880870Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:56.485758 env[1286]: time="2025-09-06T00:21:56.485169409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 00:21:56.611871 env[1286]: time="2025-09-06T00:21:56.611806368Z" level=info msg="CreateContainer within sandbox \"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 00:21:56.630485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375340206.mount: Deactivated successfully. Sep 6 00:21:56.638986 env[1286]: time="2025-09-06T00:21:56.634246471Z" level=info msg="CreateContainer within sandbox \"88ca1944105ab9a370b07a56f10d424df4f4ae648171e3e7eb5e633e25f7ec43\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813\"" Sep 6 00:21:56.638986 env[1286]: time="2025-09-06T00:21:56.636623929Z" level=info msg="StartContainer for \"45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813\"" Sep 6 00:21:56.680645 systemd[1]: run-containerd-runc-k8s.io-45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813-runc.fOWxZN.mount: Deactivated successfully. Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.556 [WARNING][4957] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68db3c5f-2540-494b-b7e8-8a05c1287332", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"c48a5897be856341bc6ce32e57ff0c0cfaa5db1ce41a04d99de4e6b6fd3957bc", Pod:"coredns-7c65d6cfc9-wghq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali294bfc57275", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.557 [INFO][4957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.557 [INFO][4957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" iface="eth0" netns="" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.557 [INFO][4957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.557 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.615 [INFO][4964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.615 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.615 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.646 [WARNING][4964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.646 [INFO][4964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" HandleID="k8s-pod-network.bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--wghq8-eth0" Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.663 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.701383 env[1286]: 2025-09-06 00:21:56.666 [INFO][4957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3" Sep 6 00:21:56.703626 env[1286]: time="2025-09-06T00:21:56.703584619Z" level=info msg="TearDown network for sandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" successfully" Sep 6 00:21:56.723727 env[1286]: time="2025-09-06T00:21:56.722189012Z" level=info msg="RemovePodSandbox \"bd708b67200810a95bad8a4aa6598f210d4cab65a4758fdcde4f18a4465c28d3\" returns successfully" Sep 6 00:21:56.726793 env[1286]: time="2025-09-06T00:21:56.726742435Z" level=info msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" Sep 6 00:21:56.808105 env[1286]: time="2025-09-06T00:21:56.808039351Z" level=info msg="StartContainer for \"45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813\" returns successfully" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.814 [WARNING][5005] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0", GenerateName:"calico-kube-controllers-7bb788675f-", Namespace:"calico-system", SelfLink:"", UID:"319af4ab-7a09-4b77-906d-ff7b3b6e2b69", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb788675f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e", Pod:"calico-kube-controllers-7bb788675f-mrpct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5547e40703c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.815 [INFO][5005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.815 [INFO][5005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" iface="eth0" netns="" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.815 [INFO][5005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.815 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.854 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.854 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.854 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.864 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.864 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.866 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:56.872637 env[1286]: 2025-09-06 00:21:56.870 [INFO][5005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:56.874272 env[1286]: time="2025-09-06T00:21:56.872981299Z" level=info msg="TearDown network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" successfully" Sep 6 00:21:56.874272 env[1286]: time="2025-09-06T00:21:56.873022985Z" level=info msg="StopPodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" returns successfully" Sep 6 00:21:56.875432 env[1286]: time="2025-09-06T00:21:56.874443931Z" level=info msg="RemovePodSandbox for \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" Sep 6 00:21:56.875432 env[1286]: time="2025-09-06T00:21:56.874480620Z" level=info msg="Forcibly stopping sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\"" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:56.945 [WARNING][5037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0", GenerateName:"calico-kube-controllers-7bb788675f-", Namespace:"calico-system", SelfLink:"", UID:"319af4ab-7a09-4b77-906d-ff7b3b6e2b69", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb788675f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"f7cbd916f24cd707ad632f5ec4bd1f1608e4fc1557a86beee58cc0cdcb8fac0e", Pod:"calico-kube-controllers-7bb788675f-mrpct", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5547e40703c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:56.946 [INFO][5037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:56.946 [INFO][5037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" iface="eth0" netns="" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:56.946 [INFO][5037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:56.946 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.011 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.011 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.017 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.025 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.025 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" HandleID="k8s-pod-network.e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--kube--controllers--7bb788675f--mrpct-eth0" Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.035 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:57.049819 env[1286]: 2025-09-06 00:21:57.039 [INFO][5037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c" Sep 6 00:21:57.049819 env[1286]: time="2025-09-06T00:21:57.048087636Z" level=info msg="TearDown network for sandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" successfully" Sep 6 00:21:57.055238 env[1286]: time="2025-09-06T00:21:57.055173348Z" level=info msg="RemovePodSandbox \"e19409e81f02f4abf90c9e748d3d69da2b3654f2667bf5509eee586e8a1a3b7c\" returns successfully" Sep 6 00:21:57.089998 env[1286]: time="2025-09-06T00:21:57.089939249Z" level=info msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" Sep 6 00:21:57.205939 kubelet[2083]: I0906 00:21:57.193903 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b54c6ffc-ffpz9" podStartSLOduration=38.508517719 podStartE2EDuration="48.175368986s" podCreationTimestamp="2025-09-06 00:21:09 +0000 UTC" firstStartedPulling="2025-09-06 00:21:41.657692871 +0000 UTC m=+49.306929050" lastFinishedPulling="2025-09-06 00:21:51.324544129 +0000 UTC m=+58.973780317" observedRunningTime="2025-09-06 00:21:52.24360328 +0000 UTC m=+59.892839477" watchObservedRunningTime="2025-09-06 00:21:57.175368986 +0000 UTC m=+64.824605165" Sep 6 00:21:57.208556 kubelet[2083]: I0906 00:21:57.207100 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-m85r4" podStartSLOduration=32.375564821 podStartE2EDuration="46.207072921s" podCreationTimestamp="2025-09-06 00:21:11 +0000 UTC" firstStartedPulling="2025-09-06 00:21:42.64631973 +0000 UTC m=+50.295555909" lastFinishedPulling="2025-09-06 00:21:56.477827815 +0000 UTC m=+64.127064009" observedRunningTime="2025-09-06 00:21:57.151675826 +0000 UTC m=+64.800912030" watchObservedRunningTime="2025-09-06 00:21:57.207072921 +0000 UTC m=+64.856309121" Sep 6 00:21:57.240000 audit[5073]: NETFILTER_CFG table=filter:115 family=2 entries=12 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.240000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffece37a750 a2=0 a3=7ffece37a73c items=0 ppid=2208 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.244000 audit[5073]: NETFILTER_CFG 
table=nat:116 family=2 entries=22 op=nft_register_rule pid=5073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:21:57.244000 audit[5073]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffece37a750 a2=0 a3=7ffece37a73c items=0 ppid=2208 pid=5073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:57.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.219 [WARNING][5060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b8e70a2-2b3d-4784-830f-5560042446fa", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19", Pod:"calico-apiserver-69b54c6ffc-ffpz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib65704dedb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.223 [INFO][5060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.223 [INFO][5060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" iface="eth0" netns="" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.223 [INFO][5060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.223 [INFO][5060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.266 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.266 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.266 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.274 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.274 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.278 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:57.283327 env[1286]: 2025-09-06 00:21:57.281 [INFO][5060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.284748 env[1286]: time="2025-09-06T00:21:57.284331882Z" level=info msg="TearDown network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" successfully" Sep 6 00:21:57.284748 env[1286]: time="2025-09-06T00:21:57.284411347Z" level=info msg="StopPodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" returns successfully" Sep 6 00:21:57.285697 env[1286]: time="2025-09-06T00:21:57.285646791Z" level=info msg="RemovePodSandbox for \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" Sep 6 00:21:57.285977 env[1286]: time="2025-09-06T00:21:57.285914253Z" level=info msg="Forcibly stopping sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\"" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.361 [WARNING][5085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0", GenerateName:"calico-apiserver-69b54c6ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b8e70a2-2b3d-4784-830f-5560042446fa", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b54c6ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"b918203405a99f3da2d7e20da4c339c638aae8b5957133b6cdd65532da752d19", Pod:"calico-apiserver-69b54c6ffc-ffpz9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib65704dedb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.362 [INFO][5085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.362 [INFO][5085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" iface="eth0" netns="" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.362 [INFO][5085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.362 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.395 [INFO][5092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.396 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.396 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.408 [WARNING][5092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.408 [INFO][5092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" HandleID="k8s-pod-network.9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-calico--apiserver--69b54c6ffc--ffpz9-eth0" Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.411 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:57.417185 env[1286]: 2025-09-06 00:21:57.414 [INFO][5085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00" Sep 6 00:21:57.417185 env[1286]: time="2025-09-06T00:21:57.417135049Z" level=info msg="TearDown network for sandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" successfully" Sep 6 00:21:57.420566 env[1286]: time="2025-09-06T00:21:57.420499155Z" level=info msg="RemovePodSandbox \"9ee1af4f7a3f9956a6fd46614560303a73267ed6235d9d99d0f2289c98f74d00\" returns successfully" Sep 6 00:21:57.421226 env[1286]: time="2025-09-06T00:21:57.421186031Z" level=info msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.468 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1fe05095-1fb2-4aab-8036-ef685518a4c9", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1", Pod:"coredns-7c65d6cfc9-4vsk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6ff154353e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.469 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.469 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" iface="eth0" netns="" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.469 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.469 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.507 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.507 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.507 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.516 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.516 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.519 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:57.524548 env[1286]: 2025-09-06 00:21:57.521 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.526055 env[1286]: time="2025-09-06T00:21:57.525127888Z" level=info msg="TearDown network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" successfully" Sep 6 00:21:57.526055 env[1286]: time="2025-09-06T00:21:57.525176002Z" level=info msg="StopPodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" returns successfully" Sep 6 00:21:57.526055 env[1286]: time="2025-09-06T00:21:57.525800309Z" level=info msg="RemovePodSandbox for \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" Sep 6 00:21:57.526055 env[1286]: time="2025-09-06T00:21:57.525832813Z" level=info msg="Forcibly stopping sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\"" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.581 [WARNING][5130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1fe05095-1fb2-4aab-8036-ef685518a4c9", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 20, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0d6cc4df9c", ContainerID:"3db099d85aff7257868cc2a40a561213df1202a50727b401cc82adac48036ba1", Pod:"coredns-7c65d6cfc9-4vsk6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6ff154353e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.582 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.582 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" iface="eth0" netns="" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.582 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.582 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.610 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.611 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.611 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.623 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.623 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" HandleID="k8s-pod-network.a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Workload="ci--3510.3.8--n--0d6cc4df9c-k8s-coredns--7c65d6cfc9--4vsk6-eth0" Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.626 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:21:57.631636 env[1286]: 2025-09-06 00:21:57.628 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c" Sep 6 00:21:57.633027 env[1286]: time="2025-09-06T00:21:57.631674296Z" level=info msg="TearDown network for sandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" successfully" Sep 6 00:21:57.634676 env[1286]: time="2025-09-06T00:21:57.634627066Z" level=info msg="RemovePodSandbox \"a2fa0617fb289dab806e7606662d1adca50ac277405db4f93745ce970c76003c\" returns successfully" Sep 6 00:21:58.853538 env[1286]: time="2025-09-06T00:21:58.853478619Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:58.855041 env[1286]: time="2025-09-06T00:21:58.854987210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:58.856440 env[1286]: time="2025-09-06T00:21:58.856407148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:58.857666 env[1286]: time="2025-09-06T00:21:58.857638372Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:58.858190 env[1286]: time="2025-09-06T00:21:58.858163410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 6 00:21:58.868911 env[1286]: time="2025-09-06T00:21:58.868860622Z" level=info msg="CreateContainer within sandbox \"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 00:21:58.886419 env[1286]: time="2025-09-06T00:21:58.886360392Z" level=info msg="CreateContainer within sandbox \"d0c8fa8798cd09a62f83fa4777b7ccbdab1fff9defedecb1ebdfd94707f6a808\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6584d7118466a52d18d5b253f8db4570c094793d6ba6dc09db146b5eee725d8\"" Sep 6 00:21:58.889546 env[1286]: time="2025-09-06T00:21:58.889483897Z" level=info msg="StartContainer for \"b6584d7118466a52d18d5b253f8db4570c094793d6ba6dc09db146b5eee725d8\"" Sep 6 00:21:59.010301 env[1286]: time="2025-09-06T00:21:59.010247948Z" level=info msg="StartContainer for \"b6584d7118466a52d18d5b253f8db4570c094793d6ba6dc09db146b5eee725d8\" returns successfully" Sep 6 00:21:59.106062 kubelet[2083]: I0906 00:21:59.104475 2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sldlw" podStartSLOduration=28.937717543 podStartE2EDuration="47.104442639s" podCreationTimestamp="2025-09-06 00:21:12 +0000 UTC" firstStartedPulling="2025-09-06 00:21:40.692664989 +0000 UTC m=+48.341901169" lastFinishedPulling="2025-09-06 00:21:58.859390074 +0000 UTC m=+66.508626265" observedRunningTime="2025-09-06 00:21:59.10398513 +0000 UTC m=+66.753221332" watchObservedRunningTime="2025-09-06 00:21:59.104442639 +0000 UTC m=+66.753678839" Sep 6 00:21:59.122631 systemd[1]: 
run-containerd-runc-k8s.io-45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813-runc.IoXVOR.mount: Deactivated successfully. Sep 6 00:21:59.723295 systemd[1]: Started sshd@10-64.227.108.127:22-147.75.109.163:36604.service. Sep 6 00:21:59.729152 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 6 00:21:59.731187 kernel: audit: type=1130 audit(1757118119.723:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.227.108.127:22-147.75.109.163:36604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:59.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.227.108.127:22-147.75.109.163:36604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:59.831000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.838026 kernel: audit: type=1101 audit(1757118119.831:446): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.838327 sshd[5225]: Accepted publickey for core from 147.75.109.163 port 36604 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:59.842840 kernel: audit: type=1103 audit(1757118119.838:447): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.838000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.849761 kernel: audit: type=1006 audit(1757118119.841:448): pid=5225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 6 00:21:59.849914 kernel: audit: type=1300 audit(1757118119.841:448): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea73636a0 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:59.849945 kernel: audit: type=1327 audit(1757118119.841:448): proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:59.841000 audit[5225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea73636a0 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:59.841000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:21:59.847437 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:59.866128 systemd-logind[1277]: New session 10 of user core. Sep 6 00:21:59.874418 systemd[1]: Started session-10.scope. 
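Note on the teardown above: two guard paths keep the forced "RemovePodSandbox" idempotent — the WARNING that CNI_CONTAINERID does not match the WorkloadEndpoint ContainerID (so the WEP belonging to the newer container is left alone), and the IPAM WARNING that the address being released no longer exists (so the release is ignored). A minimal Python sketch of that control flow; `FakeStore`, `teardown`, and the method names are hypothetical stand-ins, not Calico's real client API:

```python
from types import SimpleNamespace

# Sketch of the idempotent teardown seen in the log; the datastore object and
# its methods are hypothetical stand-ins, not the real Calico client API.
def teardown(container_id, wep, datastore, handle_id):
    # Only delete the WorkloadEndpoint if it still belongs to this container
    # (the log shows a mismatch, so the WEP is kept).
    if wep is not None and wep.container_id == container_id:
        datastore.delete_workload_endpoint(wep)
    else:
        print("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
    # Releasing an allocation that was already released is treated as a no-op.
    try:
        datastore.release_by_handle(handle_id)
    except KeyError:
        print("Asked to release address but it doesn't exist. Ignoring")

class FakeStore:                                   # stand-in datastore for the demo
    def delete_workload_endpoint(self, wep): pass
    def release_by_handle(self, handle_id): raise KeyError(handle_id)

wep = SimpleNamespace(container_id="3db099d85aff72...")   # WEP now owned by a newer container
teardown("a2fa0617fb289d...", wep, FakeStore(), "k8s-pod-network.a2fa0617...")
```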
Sep 6 00:21:59.889000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.898417 kernel: audit: type=1105 audit(1757118119.889:449): pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.898536 kernel: audit: type=1103 audit(1757118119.895:450): pid=5228 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.895000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:21:59.950694 kubelet[2083]: I0906 00:21:59.948865 2083 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 00:21:59.950694 kubelet[2083]: I0906 00:21:59.950349 2083 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 00:22:00.722872 sshd[5225]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:00.724000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.732889 kernel: audit: type=1106 audit(1757118120.724:451): pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.736233 systemd[1]: Started sshd@11-64.227.108.127:22-147.75.109.163:33688.service. Sep 6 00:22:00.737201 systemd[1]: sshd@10-64.227.108.127:22-147.75.109.163:36604.service: Deactivated successfully. Sep 6 00:22:00.724000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.743831 kernel: audit: type=1104 audit(1757118120.724:452): pid=5225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.742968 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:22:00.743234 systemd-logind[1277]: Session 10 logged out. Waiting for processes to exit. 
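The pod_startup_latency_tracker entry above for csi-node-driver-sldlw can be reproduced from the timestamps it prints: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Python, with the logged nanoseconds truncated to microseconds:

```python
from datetime import datetime

# Timestamps copied from the kubelet pod_startup_latency_tracker line above.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created    = datetime.strptime("2025-09-06 00:21:12.000000 +0000", FMT)
first_pull = datetime.strptime("2025-09-06 00:21:40.692664 +0000", FMT)
last_pull  = datetime.strptime("2025-09-06 00:21:58.859390 +0000", FMT)
running    = datetime.strptime("2025-09-06 00:21:59.103985 +0000", FMT)

e2e     = (running - created).total_seconds()       # ~47.10 s -> podStartE2EDuration
pulling = (last_pull - first_pull).total_seconds()  # ~18.17 s spent pulling the image
slo     = e2e - pulling                             # ~28.94 s -> podStartSLOduration
print(f"E2E={e2e:.3f}s  pulling={pulling:.3f}s  SLO={slo:.3f}s")
```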
Sep 6 00:22:00.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-64.227.108.127:22-147.75.109.163:33688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:00.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-64.227.108.127:22-147.75.109.163:36604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:00.745986 systemd-logind[1277]: Removed session 10. Sep 6 00:22:00.792000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.793917 sshd[5237]: Accepted publickey for core from 147.75.109.163 port 33688 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:00.794000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.795000 audit[5237]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf2a48610 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:00.795000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:00.797258 sshd[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:00.805102 systemd[1]: Started session-11.scope. Sep 6 00:22:00.805669 systemd-logind[1277]: New session 11 of user core. Sep 6 00:22:00.816000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:00.819000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.037337 sshd[5237]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:01.043596 systemd[1]: Started sshd@12-64.227.108.127:22-147.75.109.163:33692.service. Sep 6 00:22:01.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-64.227.108.127:22-147.75.109.163:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:01.043000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.047000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-64.227.108.127:22-147.75.109.163:33688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:01.050209 systemd[1]: sshd@11-64.227.108.127:22-147.75.109.163:33688.service: Deactivated successfully. Sep 6 00:22:01.051501 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:22:01.055943 systemd-logind[1277]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:22:01.064580 systemd-logind[1277]: Removed session 11. Sep 6 00:22:01.120000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.122056 sshd[5247]: Accepted publickey for core from 147.75.109.163 port 33692 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:01.123000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.123000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddf9244d0 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:01.123000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:01.125579 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:01.138215 systemd-logind[1277]: New session 12 of user core. Sep 6 00:22:01.139193 systemd[1]: Started session-12.scope. 
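The proctitle= values in the audit records for these logins are hex-encoded; decoding the value that keeps appearing shows it is simply the process title of the privileged sshd child:

```python
# Audit PROCTITLE fields are hex-encoded byte strings; this is the value
# logged for each of the sshd logins above.
hex_title = "737368643A20636F7265205B707269765D"
print(bytes.fromhex(hex_title).decode("ascii"))   # -> sshd: core [priv]
```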
Sep 6 00:22:01.154000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.156000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.321862 sshd[5247]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:01.323000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.323000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:01.328999 systemd[1]: sshd@12-64.227.108.127:22-147.75.109.163:33692.service: Deactivated successfully. Sep 6 00:22:01.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-64.227.108.127:22-147.75.109.163:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:01.330700 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:22:01.331341 systemd-logind[1277]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:22:01.332414 systemd-logind[1277]: Removed session 12. 
Sep 6 00:22:01.631496 kubelet[2083]: I0906 00:22:01.630554 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:22:01.763280 kubelet[2083]: I0906 00:22:01.763227 2083 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:22:01.939000 audit[5283]: NETFILTER_CFG table=filter:117 family=2 entries=11 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:01.939000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe02a816e0 a2=0 a3=7ffe02a816cc items=0 ppid=2208 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:01.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:01.942000 audit[5283]: NETFILTER_CFG table=nat:118 family=2 entries=29 op=nft_register_chain pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:01.942000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe02a816e0 a2=0 a3=7ffe02a816cc items=0 ppid=2208 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:01.942000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:01.964000 audit[5285]: NETFILTER_CFG table=filter:119 family=2 entries=10 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:01.964000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffcfe16fa00 a2=0 a3=7ffcfe16f9ec items=0 ppid=2208 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:01.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:01.970000 audit[5285]: NETFILTER_CFG table=nat:120 family=2 entries=36 op=nft_register_chain pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:01.970000 audit[5285]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffcfe16fa00 a2=0 a3=7ffcfe16f9ec items=0 ppid=2208 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:01.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:06.328466 systemd[1]: Started sshd@13-64.227.108.127:22-147.75.109.163:33696.service. Sep 6 00:22:06.333806 kernel: kauditd_printk_skb: 35 callbacks suppressed Sep 6 00:22:06.334917 kernel: audit: type=1130 audit(1757118126.329:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-64.227.108.127:22-147.75.109.163:33696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:06.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-64.227.108.127:22-147.75.109.163:33696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:06.435000 audit[5314]: USER_ACCT pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.445365 kernel: audit: type=1101 audit(1757118126.435:477): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.445476 kernel: audit: type=1103 audit(1757118126.438:478): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.445509 kernel: audit: type=1006 audit(1757118126.438:479): pid=5314 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 6 00:22:06.438000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.446176 sshd[5314]: Accepted publickey for core from 147.75.109.163 port 33696 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:06.445100 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:06.438000 audit[5314]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec024a3f0 a2=3 a3=0 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:06.451921 kernel: audit: type=1300 audit(1757118126.438:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec024a3f0 a2=3 a3=0 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:06.438000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:06.463135 kernel: audit: type=1327 audit(1757118126.438:479): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:06.465982 systemd-logind[1277]: New session 13 of user core. Sep 6 00:22:06.466846 systemd[1]: Started session-13.scope. 
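The NETFILTER_CFG records a short way back (audit pids 5283 and 5285, parented by pid 2208, most likely the kube-proxy process given the nat chain updates) carry the same hex encoding, but there NUL bytes separate argv entries, so decoding recovers the full iptables-restore invocation:

```python
# Same hex encoding as the sshd PROCTITLE values, but with NUL bytes
# separating argv entries; this is the proctitle from the NETFILTER_CFG
# events above.
hex_argv = ("69707461626C65732D726573746F7265002D770035002D5700"
            "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
argv = [a.decode() for a in bytes.fromhex(hex_argv).split(b"\x00")]
print(argv)   # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```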
Sep 6 00:22:06.474000 audit[5314]: USER_START pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.479849 kernel: audit: type=1105 audit(1757118126.474:480): pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.474000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.485987 kernel: audit: type=1103 audit(1757118126.474:481): pid=5317 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.929980 sshd[5314]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:06.930000 audit[5314]: USER_END pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.934000 systemd-logind[1277]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:22:06.930000 audit[5314]: CRED_DISP pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.935858 systemd[1]: sshd@13-64.227.108.127:22-147.75.109.163:33696.service: Deactivated successfully. Sep 6 00:22:06.938609 kernel: audit: type=1106 audit(1757118126.930:482): pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.938747 kernel: audit: type=1104 audit(1757118126.930:483): pid=5314 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:06.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-64.227.108.127:22-147.75.109.163:33696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:06.939976 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:22:06.942385 systemd-logind[1277]: Removed session 13. 
Sep 6 00:22:07.641929 kubelet[2083]: E0906 00:22:07.641856 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:10.747452 systemd[1]: run-containerd-runc-k8s.io-3c188926570fb5fd00ca3c5f4e6cc57434f18ff0fe1a53f63be9611165781e23-runc.7hlgr0.mount: Deactivated successfully. Sep 6 00:22:11.935829 systemd[1]: Started sshd@14-64.227.108.127:22-147.75.109.163:35424.service. Sep 6 00:22:11.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-64.227.108.127:22-147.75.109.163:35424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:11.937205 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:22:11.937516 kernel: audit: type=1130 audit(1757118131.935:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-64.227.108.127:22-147.75.109.163:35424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.022000 audit[5350]: USER_ACCT pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.024842 sshd[5350]: Accepted publickey for core from 147.75.109.163 port 35424 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:12.026000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.029114 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:12.031022 kernel: audit: type=1101 audit(1757118132.022:486): pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.031151 kernel: audit: type=1103 audit(1757118132.026:487): pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.031182 kernel: audit: type=1006 audit(1757118132.026:488): pid=5350 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 6 00:22:12.033067 kernel: audit: type=1300 audit(1757118132.026:488): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3c33bf30 a2=3 a3=0 items=0 ppid=1 pid=5350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:12.026000 audit[5350]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3c33bf30 a2=3 a3=0 items=0 ppid=1 pid=5350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:12.026000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:12.037826 kernel: audit: type=1327 audit(1757118132.026:488): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:12.042850 systemd[1]: Started session-14.scope. Sep 6 00:22:12.043797 systemd-logind[1277]: New session 14 of user core. Sep 6 00:22:12.052000 audit[5350]: USER_START pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.058351 kernel: audit: type=1105 audit(1757118132.052:489): pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.058000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.062795 kernel: audit: type=1103 audit(1757118132.058:490): pid=5353 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.375106 sshd[5350]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:12.379000 audit[5350]: USER_END pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.382000 audit[5350]: CRED_DISP pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.387672 kernel: audit: type=1106 audit(1757118132.379:491): pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.388311 kernel: audit: type=1104 audit(1757118132.382:492): pid=5350 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:12.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-64.227.108.127:22-147.75.109.163:35424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.385587 systemd[1]: sshd@14-64.227.108.127:22-147.75.109.163:35424.service: Deactivated successfully. Sep 6 00:22:12.387125 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:22:12.388609 systemd-logind[1277]: Session 14 logged out. Waiting for processes to exit. 
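The kubelet dns.go:153 error just above (it recurs several more times below) reflects the resolver's three-nameserver cap: when the list assembled for a pod's resolv.conf has more than three entries, kubelet keeps only the first three and logs the line it actually applied. A minimal sketch of that truncation; the fourth address in the demo call is hypothetical, only the three applied addresses appear in the log:

```python
# Sketch of the truncation behind "Nameserver limits exceeded"; the glibc
# resolver honors at most three nameserver entries, so only the first three
# are applied.
MAX_NAMESERVERS = 3

def apply_nameserver_limit(nameservers):
    applied = nameservers[:MAX_NAMESERVERS]
    if len(nameservers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been "
              f"omitted, the applied nameserver line is: {' '.join(applied)}")
    return applied

# The fourth address is a made-up extra entry for illustration.
apply_nameserver_limit(["67.207.67.2", "67.207.67.3", "67.207.67.2", "10.0.0.2"])
```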
Sep 6 00:22:12.391529 systemd-logind[1277]: Removed session 14. Sep 6 00:22:12.614829 kubelet[2083]: E0906 00:22:12.614786 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.610889 kubelet[2083]: E0906 00:22:14.610834 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:17.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.227.108.127:22-147.75.109.163:35436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.378592 systemd[1]: Started sshd@15-64.227.108.127:22-147.75.109.163:35436.service. Sep 6 00:22:17.382383 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:22:17.382503 kernel: audit: type=1130 audit(1757118137.378:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.227.108.127:22-147.75.109.163:35436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.485000 audit[5362]: USER_ACCT pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.489776 sshd[5362]: Accepted publickey for core from 147.75.109.163 port 35436 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:17.490738 kernel: audit: type=1101 audit(1757118137.485:495): pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.490000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.493074 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:17.496452 kernel: audit: type=1103 audit(1757118137.490:496): pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.496575 kernel: audit: type=1006 audit(1757118137.490:497): pid=5362 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 6 00:22:17.490000 audit[5362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebc8b41b0 a2=3 a3=0 items=0 ppid=1 pid=5362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:17.500757 kernel: audit: type=1300 audit(1757118137.490:497): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebc8b41b0 a2=3 a3=0 items=0 ppid=1 pid=5362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 
comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:17.501328 kernel: audit: type=1327 audit(1757118137.490:497): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:17.490000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:17.507438 systemd-logind[1277]: New session 15 of user core. Sep 6 00:22:17.507858 systemd[1]: Started session-15.scope. Sep 6 00:22:17.515000 audit[5362]: USER_START pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.522411 kernel: audit: type=1105 audit(1757118137.515:498): pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.522000 audit[5365]: CRED_ACQ pid=5365 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.527874 kernel: audit: type=1103 audit(1757118137.522:499): pid=5365 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.610791 kubelet[2083]: E0906 00:22:17.610665 2083 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:17.963964 sshd[5362]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:17.964000 audit[5362]: USER_END pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.969750 kernel: audit: type=1106 audit(1757118137.964:500): pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.969000 audit[5362]: CRED_DISP pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.973769 kernel: audit: type=1104 audit(1757118137.969:501): pid=5362 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:17.976003 systemd-logind[1277]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:22:17.976176 systemd[1]: sshd@15-64.227.108.127:22-147.75.109.163:35436.service: Deactivated successfully. 
Sep 6 00:22:17.977168 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:22:17.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-64.227.108.127:22-147.75.109.163:35436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.978015 systemd-logind[1277]: Removed session 15. Sep 6 00:22:22.968929 systemd[1]: Started sshd@16-64.227.108.127:22-147.75.109.163:43008.service. Sep 6 00:22:22.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.227.108.127:22-147.75.109.163:43008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.970666 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:22:22.970779 kernel: audit: type=1130 audit(1757118142.967:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.227.108.127:22-147.75.109.163:43008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:23.040000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.046321 sshd[5374]: Accepted publickey for core from 147.75.109.163 port 43008 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:23.046742 kernel: audit: type=1101 audit(1757118143.040:504): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.047000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.052826 kernel: audit: type=1103 audit(1757118143.047:505): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.053932 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:23.051000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff894395a0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.058521 kernel: audit: type=1006 audit(1757118143.051:506): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 6 00:22:23.059053 kernel: audit: type=1300 audit(1757118143.051:506): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff894395a0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.051000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:23.061191 kernel: audit: type=1327 audit(1757118143.051:506): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:23.063309 systemd[1]: Started session-16.scope. Sep 6 00:22:23.064747 systemd-logind[1277]: New session 16 of user core. Sep 6 00:22:23.070000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.078023 kernel: audit: type=1105 audit(1757118143.070:507): pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.080321 kernel: audit: type=1103 audit(1757118143.075:508): pid=5377 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.075000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.338473 sshd[5374]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:23.341000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.346775 kernel: audit: type=1106 audit(1757118143.341:509): pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.347023 systemd[1]: Started sshd@17-64.227.108.127:22-147.75.109.163:43012.service. Sep 6 00:22:23.341000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.348387 systemd[1]: sshd@16-64.227.108.127:22-147.75.109.163:43008.service: Deactivated successfully. Sep 6 00:22:23.351830 kernel: audit: type=1104 audit(1757118143.341:510): pid=5374 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.351329 systemd-logind[1277]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:22:23.352253 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 6 00:22:23.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-64.227.108.127:22-147.75.109.163:43012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:23.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-64.227.108.127:22-147.75.109.163:43008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:23.354562 systemd-logind[1277]: Removed session 16. Sep 6 00:22:23.410000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.412994 sshd[5385]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:23.413000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.414000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc607734b0 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:23.414000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:23.417154 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:23.425059 systemd-logind[1277]: New session 17 of user core. Sep 6 00:22:23.425845 systemd[1]: Started session-17.scope. Sep 6 00:22:23.432000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.434000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.941879 sshd[5385]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:23.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-64.227.108.127:22-147.75.109.163:43026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:23.947624 systemd[1]: Started sshd@18-64.227.108.127:22-147.75.109.163:43026.service. 
Sep 6 00:22:23.947000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.947000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:23.952914 systemd-logind[1277]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:22:23.954871 systemd[1]: sshd@17-64.227.108.127:22-147.75.109.163:43012.service: Deactivated successfully. Sep 6 00:22:23.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-64.227.108.127:22-147.75.109.163:43012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:23.957251 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:22:23.958479 systemd-logind[1277]: Removed session 17. Sep 6 00:22:24.009000 audit[5404]: USER_ACCT pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:24.011938 sshd[5404]: Accepted publickey for core from 147.75.109.163 port 43026 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:24.012000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:24.012000 audit[5404]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0257d2d0 a2=3 a3=0 items=0 ppid=1 pid=5404 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:24.012000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:24.014891 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:24.020513 systemd-logind[1277]: New session 18 of user core. Sep 6 00:22:24.022185 systemd[1]: Started session-18.scope. Sep 6 00:22:24.031000 audit[5404]: USER_START pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:24.033000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:24.879903 systemd[1]: run-containerd-runc-k8s.io-45242f77059b822299057e61f73acd32fe61145e333d5a863310da40278e3813-runc.ENFyyQ.mount: Deactivated successfully. 
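By this point the journal shows a steady train of short-lived SSH sessions (10 through 18) from 147.75.109.163, each opened and closed within a second or two. A small sketch that pairs the systemd-logind open/close messages, assuming the journal has been exported to plain text (journal.txt is a hypothetical filename):

```python
import re

# Pairs "New session N" / "Removed session N" systemd-logind messages from a
# plain-text export of this journal.
OPENED = re.compile(r"New session (\d+) of user (\S+)\.")
CLOSED = re.compile(r"Removed session (\d+)\.")

state = {}
with open("journal.txt") as journal:        # hypothetical export of this log
    for line in journal:
        if (m := OPENED.search(line)):
            state[m.group(1)] = f"opened for {m.group(2)}"
        elif (m := CLOSED.search(line)):
            state[m.group(1)] = "closed"

for session, status in sorted(state.items(), key=lambda kv: int(kv[0])):
    print(f"session {session}: {status}")
```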
Sep 6 00:22:26.232000 audit[5461]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:26.232000 audit[5461]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe27f894a0 a2=0 a3=7ffe27f8948c items=0 ppid=2208 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.232000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:26.240000 audit[5461]: NETFILTER_CFG table=nat:122 family=2 entries=31 op=nft_register_chain pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:26.240000 audit[5461]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe27f894a0 a2=0 a3=7ffe27f8948c items=0 ppid=2208 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:26.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:26.922310 sshd[5404]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:26.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-64.227.108.127:22-147.75.109.163:43030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:26.967132 systemd[1]: Started sshd@19-64.227.108.127:22-147.75.109.163:43030.service. Sep 6 00:22:26.989000 audit[5404]: USER_END pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:26.990000 audit[5404]: CRED_DISP pid=5404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:26.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-64.227.108.127:22-147.75.109.163:43026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:26.994532 systemd[1]: sshd@18-64.227.108.127:22-147.75.109.163:43026.service: Deactivated successfully. Sep 6 00:22:26.996739 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 6 00:22:27.014000 audit[5466]: NETFILTER_CFG table=filter:123 family=2 entries=8 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.014000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdcd862830 a2=0 a3=7ffdcd86281c items=0 ppid=2208 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:26.996861 systemd-logind[1277]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:22:27.000478 systemd-logind[1277]: Removed session 18. Sep 6 00:22:27.066000 audit[5466]: NETFILTER_CFG table=nat:124 family=2 entries=26 op=nft_register_rule pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:27.066000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdcd862830 a2=0 a3=7ffdcd86281c items=0 ppid=2208 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:27.101281 sshd[5463]: Accepted publickey for core from 147.75.109.163 port 43030 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:27.099000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:27.101000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:27.101000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd97a9fb0 a2=3 a3=0 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:27.101000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:27.103954 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:27.111202 systemd-logind[1277]: New session 19 of user core. Sep 6 00:22:27.112860 systemd[1]: Started session-19.scope. 
Sep 6 00:22:27.129000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:27.131000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.085013 sshd[5463]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:28.091791 kernel: kauditd_printk_skb: 43 callbacks suppressed Sep 6 00:22:28.097166 kernel: audit: type=1106 audit(1757118148.085:540): pid=5463 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.099451 kernel: audit: type=1104 audit(1757118148.085:541): pid=5463 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.099551 kernel: audit: type=1131 audit(1757118148.093:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-64.227.108.127:22-147.75.109.163:43030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:28.085000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.085000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-64.227.108.127:22-147.75.109.163:43030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:28.094496 systemd[1]: sshd@19-64.227.108.127:22-147.75.109.163:43030.service: Deactivated successfully. Sep 6 00:22:28.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-64.227.108.127:22-147.75.109.163:43044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:28.108836 kernel: audit: type=1130 audit(1757118148.102:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-64.227.108.127:22-147.75.109.163:43044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:28.102092 systemd-logind[1277]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:22:28.103578 systemd[1]: Started sshd@20-64.227.108.127:22-147.75.109.163:43044.service. 
Sep 6 00:22:28.105452 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:22:28.109567 systemd-logind[1277]: Removed session 19. Sep 6 00:22:28.222037 sshd[5477]: Accepted publickey for core from 147.75.109.163 port 43044 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:28.220000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.221000 audit[5480]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:28.228514 kernel: audit: type=1101 audit(1757118148.220:544): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.228600 kernel: audit: type=1325 audit(1757118148.221:545): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:28.229120 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:28.221000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fffe304b930 a2=0 a3=7fffe304b91c items=0 ppid=2208 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:28.236885 kernel: audit: type=1300 audit(1757118148.221:545): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fffe304b930 a2=0 a3=7fffe304b91c items=0 ppid=2208 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:28.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:28.243412 kernel: audit: type=1327 audit(1757118148.221:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:28.243549 kernel: audit: type=1103 audit(1757118148.227:546): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.227000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.245263 systemd[1]: Started session-20.scope. Sep 6 00:22:28.246433 systemd-logind[1277]: New session 20 of user core. 
Sep 6 00:22:28.255749 kernel: audit: type=1006 audit(1757118148.227:547): pid=5477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Sep 6 00:22:28.227000 audit[5477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe500eaa70 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:28.227000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:28.259000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.266000 audit[5480]: NETFILTER_CFG table=nat:126 family=2 entries=26 op=nft_register_rule pid=5480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:28.266000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.266000 audit[5480]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fffe304b930 a2=0 a3=0 items=0 ppid=2208 pid=5480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:28.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:28.528547 sshd[5477]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:28.528000 audit[5477]: USER_END pid=5477 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.528000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:28.536098 systemd[1]: sshd@20-64.227.108.127:22-147.75.109.163:43044.service: Deactivated successfully. Sep 6 00:22:28.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-64.227.108.127:22-147.75.109.163:43044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:28.537858 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:22:28.538504 systemd-logind[1277]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:22:28.539535 systemd-logind[1277]: Removed session 20. 
Sep 6 00:22:33.342000 audit[5494]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:33.352611 kernel: kauditd_printk_skb: 10 callbacks suppressed Sep 6 00:22:33.353319 kernel: audit: type=1325 audit(1757118153.342:554): table=filter:127 family=2 entries=20 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:33.342000 audit[5494]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff99f01840 a2=0 a3=7fff99f0182c items=0 ppid=2208 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:33.358906 kernel: audit: type=1300 audit(1757118153.342:554): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff99f01840 a2=0 a3=7fff99f0182c items=0 ppid=2208 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:33.359397 kernel: audit: type=1327 audit(1757118153.342:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:33.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:33.360000 audit[5494]: NETFILTER_CFG table=nat:128 family=2 entries=110 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:33.360000 audit[5494]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff99f01840 a2=0 a3=7fff99f0182c items=0 ppid=2208 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:33.369871 kernel: audit: type=1325 audit(1757118153.360:555): table=nat:128 family=2 entries=110 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:22:33.370020 kernel: audit: type=1300 audit(1757118153.360:555): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fff99f01840 a2=0 a3=7fff99f0182c items=0 ppid=2208 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:33.370517 kernel: audit: type=1327 audit(1757118153.360:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:33.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:22:33.541427 systemd[1]: Started sshd@21-64.227.108.127:22-147.75.109.163:59770.service. Sep 6 00:22:33.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-64.227.108.127:22-147.75.109.163:59770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:33.545789 kernel: audit: type=1130 audit(1757118153.540:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-64.227.108.127:22-147.75.109.163:59770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:33.659000 audit[5496]: USER_ACCT pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:33.664092 sshd[5496]: Accepted publickey for core from 147.75.109.163 port 59770 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:33.664901 kernel: audit: type=1101 audit(1757118153.659:557): pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:33.664000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:33.670197 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:33.672634 kernel: audit: type=1103 audit(1757118153.664:558): pid=5496 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:33.672812 kernel: audit: type=1006 audit(1757118153.664:559): pid=5496 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 6 00:22:33.664000 audit[5496]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed63fc2c0 a2=3 a3=0 items=0 ppid=1 pid=5496 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:33.664000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:33.680741 systemd-logind[1277]: New session 21 of user core. Sep 6 00:22:33.681470 systemd[1]: Started session-21.scope. 
Sep 6 00:22:33.686000 audit[5496]: USER_START pid=5496 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:33.688000 audit[5499]: CRED_ACQ pid=5499 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:34.036464 sshd[5496]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:34.037000 audit[5496]: USER_END pid=5496 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:34.037000 audit[5496]: CRED_DISP pid=5496 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:34.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-64.227.108.127:22-147.75.109.163:59770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:34.042454 systemd[1]: sshd@21-64.227.108.127:22-147.75.109.163:59770.service: Deactivated successfully. Sep 6 00:22:34.044024 systemd-logind[1277]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:22:34.044040 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:22:34.045433 systemd-logind[1277]: Removed session 21. Sep 6 00:22:39.045469 systemd[1]: Started sshd@22-64.227.108.127:22-147.75.109.163:59780.service. Sep 6 00:22:39.048314 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:22:39.050838 kernel: audit: type=1130 audit(1757118159.045:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-64.227.108.127:22-147.75.109.163:59780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:39.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-64.227.108.127:22-147.75.109.163:59780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:39.146000 audit[5508]: USER_ACCT pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.150732 kernel: audit: type=1101 audit(1757118159.146:566): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.151831 sshd[5508]: Accepted publickey for core from 147.75.109.163 port 59780 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:39.152000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.162484 kernel: audit: type=1103 audit(1757118159.152:567): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.162577 kernel: audit: type=1006 audit(1757118159.152:568): pid=5508 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 6 00:22:39.162624 kernel: audit: type=1300 audit(1757118159.152:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcac2da90 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:39.163806 kernel: audit: type=1327 audit(1757118159.152:568): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:39.152000 audit[5508]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcac2da90 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:39.152000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:39.164254 sshd[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:39.172373 systemd[1]: Started session-22.scope. Sep 6 00:22:39.173230 systemd-logind[1277]: New session 22 of user core. 
Sep 6 00:22:39.180000 audit[5508]: USER_START pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.182000 audit[5511]: CRED_ACQ pid=5511 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.188870 kernel: audit: type=1105 audit(1757118159.180:569): pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.189277 kernel: audit: type=1103 audit(1757118159.182:570): pid=5511 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.486356 sshd[5508]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:39.491000 audit[5508]: USER_END pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.491000 audit[5508]: CRED_DISP pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.497164 systemd[1]: sshd@22-64.227.108.127:22-147.75.109.163:59780.service: Deactivated successfully. Sep 6 00:22:39.498198 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:22:39.498900 kernel: audit: type=1106 audit(1757118159.491:571): pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.498969 kernel: audit: type=1104 audit(1757118159.491:572): pid=5508 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:39.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-64.227.108.127:22-147.75.109.163:59780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:39.499483 systemd-logind[1277]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:22:39.500559 systemd-logind[1277]: Removed session 22. Sep 6 00:22:40.050553 systemd[1]: Started sshd@23-64.227.108.127:22-221.226.17.34:41916.service. 
Sep 6 00:22:40.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.227.108.127:22-221.226.17.34:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:40.896813 sshd[5520]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.226.17.34 user=root Sep 6 00:22:40.896000 audit[5520]: USER_AUTH pid=5520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=221.226.17.34 addr=221.226.17.34 terminal=ssh res=failed' Sep 6 00:22:42.781480 sshd[5520]: Failed password for root from 221.226.17.34 port 41916 ssh2 Sep 6 00:22:44.494063 systemd[1]: Started sshd@24-64.227.108.127:22-147.75.109.163:35772.service. Sep 6 00:22:44.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.227.108.127:22-147.75.109.163:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.496423 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 6 00:22:44.499559 kernel: audit: type=1130 audit(1757118164.494:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.227.108.127:22-147.75.109.163:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.597000 audit[5544]: USER_ACCT pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.602103 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 35772 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:44.603527 kernel: audit: type=1101 audit(1757118164.597:577): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.603000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.609423 kernel: audit: type=1103 audit(1757118164.603:578): pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.609587 kernel: audit: type=1006 audit(1757118164.604:579): pid=5544 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 6 00:22:44.611015 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:44.604000 audit[5544]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeeb797440 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:44.615737 kernel: audit: 
type=1300 audit(1757118164.604:579): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeeb797440 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:44.604000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:44.622754 kernel: audit: type=1327 audit(1757118164.604:579): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:44.636417 systemd-logind[1277]: New session 23 of user core. Sep 6 00:22:44.637292 systemd[1]: Started session-23.scope. Sep 6 00:22:44.644000 audit[5544]: USER_START pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.650136 kernel: audit: type=1105 audit(1757118164.644:580): pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.650000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.655516 kernel: audit: type=1103 audit(1757118164.650:581): pid=5547 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:44.785258 sshd[5520]: Received disconnect from 221.226.17.34 port 41916:11: Bye Bye [preauth] Sep 6 00:22:44.785258 sshd[5520]: Disconnected from authenticating user root 221.226.17.34 port 41916 [preauth] Sep 6 00:22:44.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.227.108.127:22-221.226.17.34:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.790758 kernel: audit: type=1131 audit(1757118164.786:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-64.227.108.127:22-221.226.17.34:41916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:44.787221 systemd[1]: sshd@23-64.227.108.127:22-221.226.17.34:41916.service: Deactivated successfully. 
Sep 6 00:22:45.571423 sshd[5544]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:45.592000 audit[5544]: USER_END pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:45.599299 kernel: audit: type=1106 audit(1757118165.592:583): pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:45.592000 audit[5544]: CRED_DISP pid=5544 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:45.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-64.227.108.127:22-147.75.109.163:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:45.595750 systemd[1]: sshd@24-64.227.108.127:22-147.75.109.163:35772.service: Deactivated successfully. Sep 6 00:22:45.596689 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:22:45.600085 systemd-logind[1277]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:22:45.613083 systemd-logind[1277]: Removed session 23. Sep 6 00:22:50.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.227.108.127:22-147.75.109.163:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.570551 kernel: kauditd_printk_skb: 2 callbacks suppressed Sep 6 00:22:50.573827 kernel: audit: type=1130 audit(1757118170.568:586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.227.108.127:22-147.75.109.163:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:50.569092 systemd[1]: Started sshd@25-64.227.108.127:22-147.75.109.163:57476.service. 
Sep 6 00:22:50.679898 sshd[5558]: Accepted publickey for core from 147.75.109.163 port 57476 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:50.690225 kernel: audit: type=1101 audit(1757118170.679:587): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.690324 kernel: audit: type=1103 audit(1757118170.684:588): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.690355 kernel: audit: type=1006 audit(1757118170.684:589): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 6 00:22:50.679000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.684000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.691218 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:50.684000 audit[5558]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1a949540 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:50.698898 kernel: audit: type=1300 audit(1757118170.684:589): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1a949540 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:50.704216 systemd-logind[1277]: New session 24 of user core. Sep 6 00:22:50.706287 systemd[1]: Started session-24.scope. 
Sep 6 00:22:50.684000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:50.708739 kernel: audit: type=1327 audit(1757118170.684:589): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:50.713000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.721982 kernel: audit: type=1105 audit(1757118170.713:590): pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.716000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:50.727742 kernel: audit: type=1103 audit(1757118170.716:591): pid=5561 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:51.324856 sshd[5558]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:51.325000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:51.334285 kernel: audit: type=1106 audit(1757118171.325:592): pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:51.334390 kernel: audit: type=1104 audit(1757118171.325:593): pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:51.325000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:51.331803 systemd[1]: sshd@25-64.227.108.127:22-147.75.109.163:57476.service: Deactivated successfully. Sep 6 00:22:51.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-64.227.108.127:22-147.75.109.163:57476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:51.332937 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:22:51.334316 systemd-logind[1277]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:22:51.335438 systemd-logind[1277]: Removed session 24. 
Sep 6 00:22:56.331129 systemd[1]: Started sshd@26-64.227.108.127:22-147.75.109.163:57480.service. Sep 6 00:22:56.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.227.108.127:22-147.75.109.163:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.336176 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:22:56.336556 kernel: audit: type=1130 audit(1757118176.331:595): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.227.108.127:22-147.75.109.163:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:56.460138 sshd[5612]: Accepted publickey for core from 147.75.109.163 port 57480 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:56.458000 audit[5612]: USER_ACCT pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.463737 kernel: audit: type=1101 audit(1757118176.458:596): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.462000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.467756 kernel: audit: type=1103 audit(1757118176.462:597): pid=5612 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.469092 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:56.471879 kernel: audit: type=1006 audit(1757118176.463:598): pid=5612 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 6 00:22:56.463000 audit[5612]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f5cd7c0 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:56.489419 kernel: audit: type=1300 audit(1757118176.463:598): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f5cd7c0 a2=3 a3=0 items=0 ppid=1 pid=5612 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:56.493772 kernel: audit: type=1327 audit(1757118176.463:598): proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:56.463000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:22:56.482976 systemd[1]: Started session-25.scope. Sep 6 00:22:56.485156 systemd-logind[1277]: New session 25 of user core. 
Sep 6 00:22:56.520000 audit[5612]: USER_START pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.536443 kernel: audit: type=1105 audit(1757118176.520:599): pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.534000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:56.546751 kernel: audit: type=1103 audit(1757118176.534:600): pid=5615 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:58.388419 sshd[5612]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:58.413112 kernel: audit: type=1106 audit(1757118178.406:601): pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:58.406000 audit[5612]: USER_END pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:58.412000 audit[5612]: CRED_DISP pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:58.419260 kernel: audit: type=1104 audit(1757118178.412:602): pid=5612 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:22:58.422637 systemd[1]: sshd@26-64.227.108.127:22-147.75.109.163:57480.service: Deactivated successfully. Sep 6 00:22:58.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-64.227.108.127:22-147.75.109.163:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:58.432084 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:22:58.432680 systemd-logind[1277]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:22:58.445195 systemd-logind[1277]: Removed session 25.