Aug 13 00:51:30.217021 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:51:30.217054 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:30.217070 kernel: BIOS-provided physical RAM map:
Aug 13 00:51:30.217078 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:51:30.217085 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:51:30.220180 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:51:30.220214 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 13 00:51:30.220223 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 13 00:51:30.220240 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:51:30.220247 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:51:30.220255 kernel: NX (Execute Disable) protection: active
Aug 13 00:51:30.220265 kernel: SMBIOS 2.8 present.
Aug 13 00:51:30.220277 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 13 00:51:30.220287 kernel: Hypervisor detected: KVM
Aug 13 00:51:30.220300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:51:30.220318 kernel: kvm-clock: cpu 0, msr 5919e001, primary cpu clock
Aug 13 00:51:30.220329 kernel: kvm-clock: using sched offset of 3518054037 cycles
Aug 13 00:51:30.220342 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:51:30.220361 kernel: tsc: Detected 2494.140 MHz processor
Aug 13 00:51:30.220373 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:51:30.220386 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:51:30.220398 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 13 00:51:30.220410 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:51:30.220428 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:51:30.220440 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 13 00:51:30.220453 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220465 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220475 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220486 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:51:30.220499 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220510 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220522 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220539 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:51:30.220551 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 00:51:30.220563 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 13 00:51:30.220575 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:51:30.220587 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 13 00:51:30.220598 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 13 00:51:30.220610 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 13 00:51:30.220621 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 13 00:51:30.220643 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:51:30.220655 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 00:51:30.220668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 00:51:30.220679 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 00:51:30.220692 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Aug 13 00:51:30.220704 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Aug 13 00:51:30.220721 kernel: Zone ranges:
Aug 13 00:51:30.220735 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:51:30.220748 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 13 00:51:30.220762 kernel: Normal empty
Aug 13 00:51:30.220776 kernel: Movable zone start for each node
Aug 13 00:51:30.220788 kernel: Early memory node ranges
Aug 13 00:51:30.220801 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:51:30.220813 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 13 00:51:30.220826 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 13 00:51:30.220844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:51:30.220865 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:51:30.220878 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 13 00:51:30.220891 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:51:30.220905 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:51:30.220919 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:51:30.220935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:51:30.220944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:51:30.220953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:51:30.220967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:51:30.220980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:51:30.220989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:51:30.220998 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:51:30.221006 kernel: TSC deadline timer available
Aug 13 00:51:30.221015 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:51:30.221044 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 13 00:51:30.221053 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:51:30.221062 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:51:30.221075 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:51:30.221084 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Aug 13 00:51:30.221111 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Aug 13 00:51:30.221120 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:51:30.221129 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Aug 13 00:51:30.221137 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 13 00:51:30.221148 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Aug 13 00:51:30.221161 kernel: Policy zone: DMA32
Aug 13 00:51:30.221177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:51:30.221197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:51:30.221210 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:51:30.221225 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:51:30.221237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:51:30.221250 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 123076K reserved, 0K cma-reserved)
Aug 13 00:51:30.221262 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:51:30.221274 kernel: Kernel/User page tables isolation: enabled
Aug 13 00:51:30.221286 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:51:30.221307 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:51:30.221320 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:51:30.221334 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:51:30.221346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:51:30.221359 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:51:30.221372 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:51:30.221386 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:51:30.221398 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:51:30.221410 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:51:30.221429 kernel: random: crng init done
Aug 13 00:51:30.221442 kernel: Console: colour VGA+ 80x25
Aug 13 00:51:30.221455 kernel: printk: console [tty0] enabled
Aug 13 00:51:30.221469 kernel: printk: console [ttyS0] enabled
Aug 13 00:51:30.221482 kernel: ACPI: Core revision 20210730
Aug 13 00:51:30.221496 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:51:30.221508 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:51:30.221518 kernel: x2apic enabled
Aug 13 00:51:30.221528 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:51:30.221543 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:51:30.221565 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 00:51:30.221575 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Aug 13 00:51:30.221592 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 00:51:30.221601 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 00:51:30.221612 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:51:30.221748 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:51:30.221763 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:51:30.221777 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:51:30.221798 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:51:30.221826 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 00:51:30.221840 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 00:51:30.221859 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:51:30.221873 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:51:30.221888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:51:30.221903 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:51:30.221920 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:51:30.221933 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:51:30.221947 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:51:30.221966 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:51:30.221981 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:51:30.221996 kernel: LSM: Security Framework initializing
Aug 13 00:51:30.222008 kernel: SELinux: Initializing.
Aug 13 00:51:30.222018 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:51:30.222027 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:51:30.222042 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 13 00:51:30.222063 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 13 00:51:30.222079 kernel: signal: max sigframe size: 1776
Aug 13 00:51:30.222111 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:51:30.222123 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:51:30.222138 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:51:30.222152 kernel: x86: Booting SMP configuration:
Aug 13 00:51:30.222167 kernel: .... node #0, CPUs: #1
Aug 13 00:51:30.222184 kernel: kvm-clock: cpu 1, msr 5919e041, secondary cpu clock
Aug 13 00:51:30.222201 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Aug 13 00:51:30.222223 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:51:30.222239 kernel: smpboot: Max logical packages: 1
Aug 13 00:51:30.222256 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Aug 13 00:51:30.222272 kernel: devtmpfs: initialized
Aug 13 00:51:30.222288 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:51:30.222303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:51:30.222319 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:51:30.222335 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:51:30.222352 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:51:30.222373 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:51:30.222388 kernel: audit: type=2000 audit(1755046289.355:1): state=initialized audit_enabled=0 res=1
Aug 13 00:51:30.222404 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:51:30.222417 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:51:30.222431 kernel: cpuidle: using governor menu
Aug 13 00:51:30.222446 kernel: ACPI: bus type PCI registered
Aug 13 00:51:30.222461 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:51:30.222470 kernel: dca service started, version 1.12.1
Aug 13 00:51:30.222479 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:51:30.222494 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:51:30.222504 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:51:30.222513 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:51:30.222523 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:51:30.222558 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:51:30.222593 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:51:30.222603 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:51:30.222612 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:51:30.222621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:51:30.222638 kernel: ACPI: Interpreter enabled
Aug 13 00:51:30.222651 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:51:30.222663 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:51:30.222675 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:51:30.222689 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 00:51:30.222703 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:51:30.223029 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:51:30.227389 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Aug 13 00:51:30.227465 kernel: acpiphp: Slot [3] registered
Aug 13 00:51:30.227485 kernel: acpiphp: Slot [4] registered
Aug 13 00:51:30.227502 kernel: acpiphp: Slot [5] registered
Aug 13 00:51:30.227519 kernel: acpiphp: Slot [6] registered
Aug 13 00:51:30.227536 kernel: acpiphp: Slot [7] registered
Aug 13 00:51:30.227553 kernel: acpiphp: Slot [8] registered
Aug 13 00:51:30.227569 kernel: acpiphp: Slot [9] registered
Aug 13 00:51:30.227586 kernel: acpiphp: Slot [10] registered
Aug 13 00:51:30.227602 kernel: acpiphp: Slot [11] registered
Aug 13 00:51:30.227625 kernel: acpiphp: Slot [12] registered
Aug 13 00:51:30.227642 kernel: acpiphp: Slot [13] registered
Aug 13 00:51:30.227659 kernel: acpiphp: Slot [14] registered
Aug 13 00:51:30.227676 kernel: acpiphp: Slot [15] registered
Aug 13 00:51:30.227692 kernel: acpiphp: Slot [16] registered
Aug 13 00:51:30.227709 kernel: acpiphp: Slot [17] registered
Aug 13 00:51:30.227726 kernel: acpiphp: Slot [18] registered
Aug 13 00:51:30.227742 kernel: acpiphp: Slot [19] registered
Aug 13 00:51:30.227759 kernel: acpiphp: Slot [20] registered
Aug 13 00:51:30.227775 kernel: acpiphp: Slot [21] registered
Aug 13 00:51:30.227784 kernel: acpiphp: Slot [22] registered
Aug 13 00:51:30.227794 kernel: acpiphp: Slot [23] registered
Aug 13 00:51:30.227805 kernel: acpiphp: Slot [24] registered
Aug 13 00:51:30.227821 kernel: acpiphp: Slot [25] registered
Aug 13 00:51:30.227835 kernel: acpiphp: Slot [26] registered
Aug 13 00:51:30.227844 kernel: acpiphp: Slot [27] registered
Aug 13 00:51:30.227854 kernel: acpiphp: Slot [28] registered
Aug 13 00:51:30.227864 kernel: acpiphp: Slot [29] registered
Aug 13 00:51:30.227873 kernel: acpiphp: Slot [30] registered
Aug 13 00:51:30.227887 kernel: acpiphp: Slot [31] registered
Aug 13 00:51:30.227897 kernel: PCI host bridge to bus 0000:00
Aug 13 00:51:30.228117 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:51:30.228261 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:51:30.228370 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:51:30.228464 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 00:51:30.228559 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 13 00:51:30.228673 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:51:30.228839 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 00:51:30.228972 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 00:51:30.231196 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 13 00:51:30.231539 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Aug 13 00:51:30.231759 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 13 00:51:30.231950 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 13 00:51:30.232272 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 13 00:51:30.232482 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 13 00:51:30.232758 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Aug 13 00:51:30.233007 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Aug 13 00:51:30.237386 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 13 00:51:30.237536 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 13 00:51:30.237782 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 13 00:51:30.237972 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Aug 13 00:51:30.238195 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Aug 13 00:51:30.238402 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 13 00:51:30.238539 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Aug 13 00:51:30.238640 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 13 00:51:30.238766 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:51:30.238898 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:51:30.239023 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Aug 13 00:51:30.239134 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Aug 13 00:51:30.239246 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 13 00:51:30.239570 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:51:30.240202 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Aug 13 00:51:30.240407 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Aug 13 00:51:30.240550 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 13 00:51:30.240752 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Aug 13 00:51:30.240964 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Aug 13 00:51:30.241082 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Aug 13 00:51:30.241289 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 13 00:51:30.241416 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:51:30.241534 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 00:51:30.241710 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Aug 13 00:51:30.241864 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 13 00:51:30.243214 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:51:30.243415 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 13 00:51:30.243611 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Aug 13 00:51:30.243725 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 13 00:51:30.243941 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Aug 13 00:51:30.244077 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Aug 13 00:51:30.244249 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 13 00:51:30.244269 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:51:30.244284 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:51:30.244296 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:51:30.244306 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:51:30.244325 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 00:51:30.244336 kernel: iommu: Default domain type: Translated
Aug 13 00:51:30.244351 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:51:30.244520 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 13 00:51:30.244668 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:51:30.244799 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 13 00:51:30.244813 kernel: vgaarb: loaded
Aug 13 00:51:30.244823 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:51:30.244834 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:51:30.244851 kernel: PTP clock support registered
Aug 13 00:51:30.244861 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:51:30.244871 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:51:30.244924 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:51:30.244941 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 13 00:51:30.244957 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:51:30.244971 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:51:30.244985 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:51:30.244999 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:51:30.245025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:51:30.245039 kernel: pnp: PnP ACPI init
Aug 13 00:51:30.245051 kernel: pnp: PnP ACPI: found 4 devices
Aug 13 00:51:30.245066 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:51:30.245080 kernel: NET: Registered PF_INET protocol family
Aug 13 00:51:30.245119 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:51:30.245137 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 00:51:30.245153 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:51:30.245175 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:51:30.245189 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Aug 13 00:51:30.245203 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 00:51:30.245217 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:51:30.245231 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:51:30.245244 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:51:30.245257 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:51:30.245462 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:51:30.245641 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:51:30.245807 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:51:30.245958 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 00:51:30.249259 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 13 00:51:30.249529 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 13 00:51:30.249738 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 00:51:30.249876 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Aug 13 00:51:30.249892 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 00:51:30.250000 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 33071 usecs
Aug 13 00:51:30.250035 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:51:30.250049 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:51:30.250063 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 00:51:30.250077 kernel: Initialise system trusted keyrings
Aug 13 00:51:30.250091 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 00:51:30.252167 kernel: Key type asymmetric registered
Aug 13 00:51:30.252226 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:51:30.252244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:51:30.252260 kernel: io scheduler mq-deadline registered
Aug 13 00:51:30.252294 kernel: io scheduler kyber registered
Aug 13 00:51:30.252311 kernel: io scheduler bfq registered
Aug 13 00:51:30.252325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:51:30.252341 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 13 00:51:30.252358 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 00:51:30.252374 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 00:51:30.252388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:51:30.252402 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:51:30.252415 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:51:30.252436 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:51:30.252448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:51:30.252461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:51:30.252757 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:51:30.252876 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:51:30.253008 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:51:29 UTC (1755046289)
Aug 13 00:51:30.253135 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 00:51:30.253167 kernel: intel_pstate: CPU model not supported
Aug 13 00:51:30.253178 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:51:30.253188 kernel: Segment Routing with IPv6
Aug 13 00:51:30.253198 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:51:30.253207 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:51:30.253217 kernel: Key type dns_resolver registered
Aug 13 00:51:30.253227 kernel: IPI shorthand broadcast: enabled
Aug 13 00:51:30.253237 kernel: sched_clock: Marking stable (709727559, 83264147)->(910374157, -117382451)
Aug 13 00:51:30.253246 kernel: registered taskstats version 1
Aug 13 00:51:30.253256 kernel: Loading compiled-in X.509 certificates
Aug 13 00:51:30.253270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:51:30.253279 kernel: Key type .fscrypt registered
Aug 13 00:51:30.253289 kernel: Key type fscrypt-provisioning registered
Aug 13 00:51:30.253299 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:51:30.253308 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:51:30.253318 kernel: ima: No architecture policies found
Aug 13 00:51:30.253327 kernel: clk: Disabling unused clocks
Aug 13 00:51:30.253337 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:51:30.253350 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:51:30.253361 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:51:30.253370 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:51:30.253380 kernel: Run /init as init process
Aug 13 00:51:30.253390 kernel: with arguments:
Aug 13 00:51:30.253404 kernel: /init
Aug 13 00:51:30.253450 kernel: with environment:
Aug 13 00:51:30.253463 kernel: HOME=/
Aug 13 00:51:30.253474 kernel: TERM=linux
Aug 13 00:51:30.253484 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:51:30.253502 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:51:30.253515 systemd[1]: Detected virtualization kvm.
Aug 13 00:51:30.253526 systemd[1]: Detected architecture x86-64.
Aug 13 00:51:30.253581 systemd[1]: Running in initrd.
Aug 13 00:51:30.253594 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:51:30.253606 systemd[1]: Hostname set to .
Aug 13 00:51:30.253742 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:51:30.253757 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:51:30.253767 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:51:30.253776 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:51:30.253787 systemd[1]: Reached target paths.target.
Aug 13 00:51:30.253797 systemd[1]: Reached target slices.target.
Aug 13 00:51:30.253807 systemd[1]: Reached target swap.target.
Aug 13 00:51:30.253817 systemd[1]: Reached target timers.target.
Aug 13 00:51:30.253832 systemd[1]: Listening on iscsid.socket.
Aug 13 00:51:30.253842 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:51:30.253853 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:51:30.253863 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:51:30.253873 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:51:30.253883 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:51:30.253894 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:51:30.253905 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:51:30.253915 systemd[1]: Reached target sockets.target.
Aug 13 00:51:30.253928 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:51:30.253939 systemd[1]: Finished network-cleanup.service.
Aug 13 00:51:30.253952 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:51:30.253967 systemd[1]: Starting systemd-journald.service...
Aug 13 00:51:30.253977 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:51:30.253990 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:51:30.254000 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:51:30.254010 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:51:30.254029 systemd-journald[183]: Journal started
Aug 13 00:51:30.257211 systemd-journald[183]: Runtime Journal (/run/log/journal/0bf1af7c168e4502a9d7609cf281995a) is 4.9M, max 39.5M, 34.5M free.
Aug 13 00:51:30.247322 systemd-modules-load[184]: Inserted module 'overlay'
Aug 13 00:51:30.270144 systemd[1]: Started systemd-journald.service.
Aug 13 00:51:30.270250 kernel: audit: type=1130 audit(1755046290.264:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.281203 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:51:30.288316 kernel: audit: type=1130 audit(1755046290.280:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.288359 kernel: audit: type=1130 audit(1755046290.284:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.285551 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:51:30.296502 kernel: audit: type=1130 audit(1755046290.288:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.290594 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:51:30.293535 systemd-resolved[185]: Positive Trust Anchors:
Aug 13 00:51:30.293549 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:51:30.293606 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:51:30.304154 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:51:30.308690 systemd-resolved[185]: Defaulting to hostname 'linux'.
Aug 13 00:51:30.310602 systemd[1]: Started systemd-resolved.service.
Aug 13 00:51:30.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.319216 kernel: audit: type=1130 audit(1755046290.311:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.322083 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:51:30.332149 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:51:30.334689 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:51:30.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:30.340707 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:51:30.348403 kernel: audit: type=1130 audit(1755046290.335:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.348458 kernel: Bridge firewalling registered Aug 13 00:51:30.348474 kernel: audit: type=1130 audit(1755046290.344:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.341008 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 00:51:30.343941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:51:30.362277 dracut-cmdline[200]: dracut-dracut-053 Aug 13 00:51:30.366599 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:51:30.384138 kernel: SCSI subsystem initialized Aug 13 00:51:30.398303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:51:30.398393 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:51:30.398419 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:51:30.402937 systemd-modules-load[184]: Inserted module 'dm_multipath' Aug 13 00:51:30.403871 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:51:30.405417 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:51:30.409988 kernel: audit: type=1130 audit(1755046290.404:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.419973 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:51:30.423582 kernel: audit: type=1130 audit(1755046290.420:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:30.481144 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:51:30.502179 kernel: iscsi: registered transport (tcp) Aug 13 00:51:30.537159 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:51:30.537253 kernel: QLogic iSCSI HBA Driver Aug 13 00:51:30.598429 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:51:30.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:30.601168 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:51:30.667187 kernel: raid6: avx2x4 gen() 21341 MB/s Aug 13 00:51:30.684178 kernel: raid6: avx2x4 xor() 7536 MB/s Aug 13 00:51:30.701185 kernel: raid6: avx2x2 gen() 17907 MB/s Aug 13 00:51:30.718181 kernel: raid6: avx2x2 xor() 16866 MB/s Aug 13 00:51:30.735179 kernel: raid6: avx2x1 gen() 16598 MB/s Aug 13 00:51:30.752174 kernel: raid6: avx2x1 xor() 18302 MB/s Aug 13 00:51:30.769172 kernel: raid6: sse2x4 gen() 10656 MB/s Aug 13 00:51:30.786186 kernel: raid6: sse2x4 xor() 5679 MB/s Aug 13 00:51:30.803211 kernel: raid6: sse2x2 gen() 9536 MB/s Aug 13 00:51:30.820179 kernel: raid6: sse2x2 xor() 7584 MB/s Aug 13 00:51:30.837176 kernel: raid6: sse2x1 gen() 9260 MB/s Aug 13 00:51:30.854509 kernel: raid6: sse2x1 xor() 5176 MB/s Aug 13 00:51:30.854671 kernel: raid6: using algorithm avx2x4 gen() 21341 MB/s Aug 13 00:51:30.854695 kernel: raid6: .... xor() 7536 MB/s, rmw enabled Aug 13 00:51:30.855731 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:51:30.871142 kernel: xor: automatically using best checksumming function avx Aug 13 00:51:31.007172 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:51:31.024184 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:51:31.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:31.025000 audit: BPF prog-id=7 op=LOAD Aug 13 00:51:31.025000 audit: BPF prog-id=8 op=LOAD Aug 13 00:51:31.026848 systemd[1]: Starting systemd-udevd.service... Aug 13 00:51:31.047798 systemd-udevd[383]: Using default interface naming scheme 'v252'. Aug 13 00:51:31.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:31.055183 systemd[1]: Started systemd-udevd.service. Aug 13 00:51:31.057110 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:51:31.083457 dracut-pre-trigger[385]: rd.md=0: removing MD RAID activation Aug 13 00:51:31.147799 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:51:31.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:31.149359 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:51:31.215585 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:51:31.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:31.297154 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 00:51:31.330788 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:51:31.331039 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:51:31.331056 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:51:31.331069 kernel: GPT:9289727 != 125829119 Aug 13 00:51:31.331080 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:51:31.331105 kernel: GPT:9289727 != 125829119 Aug 13 00:51:31.331117 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:51:31.331133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:51:31.336218 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Aug 13 00:51:31.370136 kernel: libata version 3.00 loaded. 
Aug 13 00:51:31.374167 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 00:51:31.420538 kernel: ACPI: bus type USB registered Aug 13 00:51:31.420578 kernel: usbcore: registered new interface driver usbfs Aug 13 00:51:31.420600 kernel: usbcore: registered new interface driver hub Aug 13 00:51:31.420620 kernel: usbcore: registered new device driver usb Aug 13 00:51:31.420640 kernel: scsi host1: ata_piix Aug 13 00:51:31.420918 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (434) Aug 13 00:51:31.420940 kernel: scsi host2: ata_piix Aug 13 00:51:31.421183 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 00:51:31.421206 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 00:51:31.426659 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:51:31.430149 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 13 00:51:31.431892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:51:31.438252 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:51:31.444140 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:51:31.446132 kernel: AES CTR mode by8 optimization enabled Aug 13 00:51:31.449511 systemd[1]: Starting disk-uuid.service... Aug 13 00:51:31.454602 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:51:31.465015 disk-uuid[463]: Primary Header is updated. Aug 13 00:51:31.465015 disk-uuid[463]: Secondary Entries is updated. Aug 13 00:51:31.465015 disk-uuid[463]: Secondary Header is updated. Aug 13 00:51:31.484757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Aug 13 00:51:31.589144 kernel: ehci-pci: EHCI PCI platform driver Aug 13 00:51:31.597137 kernel: uhci_hcd: USB Universal Host Controller Interface driver Aug 13 00:51:31.620620 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 00:51:31.624202 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 00:51:31.624394 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 00:51:31.624573 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Aug 13 00:51:31.624718 kernel: hub 1-0:1.0: USB hub found Aug 13 00:51:31.624877 kernel: hub 1-0:1.0: 2 ports detected Aug 13 00:51:32.473126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:51:32.474203 disk-uuid[465]: The operation has completed successfully. Aug 13 00:51:32.515115 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:51:32.515909 systemd[1]: Finished disk-uuid.service. Aug 13 00:51:32.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.521325 systemd[1]: Starting verity-setup.service... Aug 13 00:51:32.541141 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:51:32.593815 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:51:32.595314 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:51:32.596621 systemd[1]: Finished verity-setup.service. Aug 13 00:51:32.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.686135 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Aug 13 00:51:32.686525 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:51:32.686970 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:51:32.687792 systemd[1]: Starting ignition-setup.service... Aug 13 00:51:32.689015 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:51:32.707535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:51:32.707631 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:51:32.707665 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:51:32.722645 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:51:32.729654 systemd[1]: Finished ignition-setup.service. Aug 13 00:51:32.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.731638 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:51:32.867065 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:51:32.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.868000 audit: BPF prog-id=9 op=LOAD Aug 13 00:51:32.869782 systemd[1]: Starting systemd-networkd.service... 
Aug 13 00:51:32.901177 ignition[603]: Ignition 2.14.0 Aug 13 00:51:32.902082 ignition[603]: Stage: fetch-offline Aug 13 00:51:32.902723 ignition[603]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:32.903522 ignition[603]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:32.909726 ignition[603]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:32.910814 ignition[603]: parsed url from cmdline: "" Aug 13 00:51:32.910922 ignition[603]: no config URL provided Aug 13 00:51:32.911434 ignition[603]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:32.912087 ignition[603]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:32.912640 ignition[603]: failed to fetch config: resource requires networking Aug 13 00:51:32.913686 ignition[603]: Ignition finished successfully Aug 13 00:51:32.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.916386 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:51:32.916542 systemd-networkd[689]: lo: Link UP Aug 13 00:51:32.916551 systemd-networkd[689]: lo: Gained carrier Aug 13 00:51:32.917531 systemd-networkd[689]: Enumeration completed Aug 13 00:51:32.918217 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:51:32.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.920042 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Aug 13 00:51:32.921551 systemd-networkd[689]: eth1: Link UP Aug 13 00:51:32.921556 systemd-networkd[689]: eth1: Gained carrier Aug 13 00:51:32.921734 systemd[1]: Started systemd-networkd.service. Aug 13 00:51:32.922823 systemd[1]: Reached target network.target. Aug 13 00:51:32.924935 systemd[1]: Starting ignition-fetch.service... Aug 13 00:51:32.926285 systemd-networkd[689]: eth0: Link UP Aug 13 00:51:32.926291 systemd-networkd[689]: eth0: Gained carrier Aug 13 00:51:32.929237 systemd[1]: Starting iscsiuio.service... Aug 13 00:51:32.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.947314 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253 Aug 13 00:51:32.950717 systemd[1]: Started iscsiuio.service. Aug 13 00:51:32.953331 systemd-networkd[689]: eth0: DHCPv4 address 137.184.32.218/20, gateway 137.184.32.1 acquired from 169.254.169.253 Aug 13 00:51:32.956195 systemd[1]: Starting iscsid.service... Aug 13 00:51:32.964075 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:32.964075 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:51:32.964075 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:51:32.964075 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. 
Aug 13 00:51:32.964075 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:51:32.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.967603 systemd[1]: Started iscsid.service. Aug 13 00:51:32.972040 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:51:32.969687 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:51:32.974853 ignition[691]: Ignition 2.14.0 Aug 13 00:51:32.974870 ignition[691]: Stage: fetch Aug 13 00:51:32.975153 ignition[691]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:32.975184 ignition[691]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:32.980918 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:32.981071 ignition[691]: parsed url from cmdline: "" Aug 13 00:51:32.981076 ignition[691]: no config URL provided Aug 13 00:51:32.981086 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:51:32.981107 ignition[691]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:51:32.981160 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 00:51:32.995481 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:51:32.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:32.996249 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:51:32.996707 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:51:32.997165 systemd[1]: Reached target remote-fs.target. 
Aug 13 00:51:32.998978 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:51:33.003324 ignition[691]: GET result: OK Aug 13 00:51:33.004027 ignition[691]: parsing config with SHA512: f6cc66c936c18b1ec7edb3ea5e866453c2e3e618e0e66b1c0e878770e5b29a8bd38493e0e1e7b2f4137aab054a8a6b9fdc297d90d9629baa6a127f76bec12680 Aug 13 00:51:33.013195 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:51:33.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.022051 unknown[691]: fetched base config from "system" Aug 13 00:51:33.022872 unknown[691]: fetched base config from "system" Aug 13 00:51:33.023684 unknown[691]: fetched user config from "digitalocean" Aug 13 00:51:33.025437 ignition[691]: fetch: fetch complete Aug 13 00:51:33.026072 ignition[691]: fetch: fetch passed Aug 13 00:51:33.026677 ignition[691]: Ignition finished successfully Aug 13 00:51:33.028889 systemd[1]: Finished ignition-fetch.service. Aug 13 00:51:33.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.030984 systemd[1]: Starting ignition-kargs.service... 
Aug 13 00:51:33.046281 ignition[714]: Ignition 2.14.0 Aug 13 00:51:33.046294 ignition[714]: Stage: kargs Aug 13 00:51:33.046478 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:33.046500 ignition[714]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:33.048829 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:33.051312 ignition[714]: kargs: kargs passed Aug 13 00:51:33.051410 ignition[714]: Ignition finished successfully Aug 13 00:51:33.053032 systemd[1]: Finished ignition-kargs.service. Aug 13 00:51:33.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.055248 systemd[1]: Starting ignition-disks.service... Aug 13 00:51:33.066794 ignition[720]: Ignition 2.14.0 Aug 13 00:51:33.066813 ignition[720]: Stage: disks Aug 13 00:51:33.066966 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:33.066988 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:33.069282 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:33.072834 ignition[720]: disks: disks passed Aug 13 00:51:33.074144 ignition[720]: Ignition finished successfully Aug 13 00:51:33.075931 systemd[1]: Finished ignition-disks.service. Aug 13 00:51:33.076601 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:51:33.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:33.077061 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:33.077794 systemd[1]: Reached target local-fs.target. Aug 13 00:51:33.078421 systemd[1]: Reached target sysinit.target. Aug 13 00:51:33.078988 systemd[1]: Reached target basic.target. Aug 13 00:51:33.081039 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:51:33.102231 systemd-fsck[728]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 00:51:33.106564 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:51:33.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.109246 systemd[1]: Mounting sysroot.mount... Aug 13 00:51:33.119163 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:51:33.119802 systemd[1]: Mounted sysroot.mount. Aug 13 00:51:33.121126 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:51:33.124103 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:51:33.126590 systemd[1]: Starting flatcar-digitalocean-network.service... Aug 13 00:51:33.129917 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:51:33.131326 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:51:33.132260 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:51:33.134900 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:51:33.136746 systemd[1]: Starting initrd-setup-root.service... 
Aug 13 00:51:33.149673 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:51:33.164643 initrd-setup-root[748]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:51:33.174929 initrd-setup-root[758]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:51:33.180857 initrd-setup-root[766]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:51:33.255995 coreos-metadata[735]: Aug 13 00:51:33.255 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:51:33.258218 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:51:33.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.259674 systemd[1]: Starting ignition-mount.service... Aug 13 00:51:33.261066 systemd[1]: Starting sysroot-boot.service... Aug 13 00:51:33.275375 coreos-metadata[734]: Aug 13 00:51:33.274 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:51:33.277570 bash[785]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:51:33.283294 coreos-metadata[735]: Aug 13 00:51:33.283 INFO Fetch successful Aug 13 00:51:33.284069 coreos-metadata[734]: Aug 13 00:51:33.284 INFO Fetch successful Aug 13 00:51:33.291250 coreos-metadata[735]: Aug 13 00:51:33.291 INFO wrote hostname ci-3510.3.8-8-adc8b0fbd5 to /sysroot/etc/hostname Aug 13 00:51:33.292122 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:51:33.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.293438 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 00:51:33.293530 systemd[1]: Finished flatcar-digitalocean-network.service. 
Aug 13 00:51:33.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.299170 ignition[787]: INFO : Ignition 2.14.0 Aug 13 00:51:33.300353 ignition[787]: INFO : Stage: mount Aug 13 00:51:33.300353 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:51:33.301336 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:51:33.302715 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:51:33.304365 ignition[787]: INFO : mount: mount passed Aug 13 00:51:33.304881 ignition[787]: INFO : Ignition finished successfully Aug 13 00:51:33.306778 systemd[1]: Finished ignition-mount.service. Aug 13 00:51:33.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.310788 systemd[1]: Finished sysroot-boot.service. Aug 13 00:51:33.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:33.612650 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Aug 13 00:51:33.625000 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (794)
Aug 13 00:51:33.625071 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:51:33.625084 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:51:33.625773 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:51:33.631916 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:51:33.640793 systemd[1]: Starting ignition-files.service...
Aug 13 00:51:33.661841 ignition[814]: INFO : Ignition 2.14.0
Aug 13 00:51:33.661841 ignition[814]: INFO : Stage: files
Aug 13 00:51:33.663262 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 00:51:33.663262 ignition[814]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Aug 13 00:51:33.664875 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:51:33.671204 ignition[814]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:51:33.671993 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:51:33.671993 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:51:33.674921 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:51:33.675673 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:51:33.676378 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:51:33.676057 unknown[814]: wrote ssh authorized keys file for user: core
Aug 13 00:51:33.678144 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:51:33.678144 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:51:33.678144 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:51:33.678144 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 00:51:33.726736 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:51:33.837895 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:51:33.839357 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:51:33.840107 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:51:33.840107 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:51:33.842166 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:51:33.842166 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:51:33.842166 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:51:33.842166 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:51:33.842166 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:51:33.845433 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 00:51:34.255638 systemd-networkd[689]: eth0: Gained IPv6LL
Aug 13 00:51:34.297647 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:51:34.668581 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:51:34.669805 ignition[814]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 00:51:34.670403 ignition[814]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 00:51:34.670926 ignition[814]: INFO : files: op(d): [started] processing unit "containerd.service"
Aug 13 00:51:34.672403 ignition[814]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:51:34.673672 ignition[814]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:51:34.673672 ignition[814]: INFO : files: op(d): [finished] processing unit "containerd.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:51:34.675140 ignition[814]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:51:34.680960 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:51:34.680960 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:51:34.680960 ignition[814]: INFO : files: files passed
Aug 13 00:51:34.680960 ignition[814]: INFO : Ignition finished successfully
Aug 13 00:51:34.691676 kernel: kauditd_printk_skb: 29 callbacks suppressed
Aug 13 00:51:34.691718 kernel: audit: type=1130 audit(1755046294.682:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.681642 systemd[1]: Finished ignition-files.service.
Aug 13 00:51:34.685254 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 00:51:34.688705 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 00:51:34.691212 systemd[1]: Starting ignition-quench.service...
Aug 13 00:51:34.696747 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:51:34.705030 kernel: audit: type=1130 audit(1755046294.697:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.705077 kernel: audit: type=1131 audit(1755046294.697:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.696921 systemd[1]: Finished ignition-quench.service.
Aug 13 00:51:34.705772 initrd-setup-root-after-ignition[839]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:51:34.706644 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 00:51:34.710246 kernel: audit: type=1130 audit(1755046294.707:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.707304 systemd[1]: Reached target ignition-complete.target.
Aug 13 00:51:34.711660 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 00:51:34.736958 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:51:34.737886 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 00:51:34.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.742524 systemd[1]: Reached target initrd-fs.target.
Aug 13 00:51:34.744075 kernel: audit: type=1130 audit(1755046294.738:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.744120 kernel: audit: type=1131 audit(1755046294.740:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.743739 systemd[1]: Reached target initrd.target.
Aug 13 00:51:34.744418 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 00:51:34.745947 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 00:51:34.768405 systemd-networkd[689]: eth1: Gained IPv6LL
Aug 13 00:51:34.773752 kernel: audit: type=1130 audit(1755046294.770:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.769964 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 00:51:34.771586 systemd[1]: Starting initrd-cleanup.service...
Aug 13 00:51:34.785063 systemd[1]: Stopped target nss-lookup.target.
Aug 13 00:51:34.786226 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 00:51:34.787274 systemd[1]: Stopped target timers.target.
Aug 13 00:51:34.794417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:51:34.795200 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 00:51:34.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.798581 systemd[1]: Stopped target initrd.target.
Aug 13 00:51:34.799326 kernel: audit: type=1131 audit(1755046294.796:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.799153 systemd[1]: Stopped target basic.target.
Aug 13 00:51:34.799617 systemd[1]: Stopped target ignition-complete.target.
Aug 13 00:51:34.800378 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 00:51:34.800951 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 00:51:34.801810 systemd[1]: Stopped target remote-fs.target.
Aug 13 00:51:34.802617 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 00:51:34.803353 systemd[1]: Stopped target sysinit.target.
Aug 13 00:51:34.804120 systemd[1]: Stopped target local-fs.target.
Aug 13 00:51:34.804821 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 00:51:34.805549 systemd[1]: Stopped target swap.target.
Aug 13 00:51:34.806316 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:51:34.811192 kernel: audit: type=1131 audit(1755046294.806:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.806471 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 00:51:34.807181 systemd[1]: Stopped target cryptsetup.target.
Aug 13 00:51:34.811703 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:51:34.816860 kernel: audit: type=1131 audit(1755046294.813:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.811937 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 00:51:34.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.813356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:51:34.813520 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 00:51:34.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.817781 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:51:34.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.818086 systemd[1]: Stopped ignition-files.service.
Aug 13 00:51:34.819219 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:51:34.819485 systemd[1]: Stopped flatcar-metadata-hostname.service.
Aug 13 00:51:34.822239 systemd[1]: Stopping ignition-mount.service...
Aug 13 00:51:34.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.825058 systemd[1]: Stopping sysroot-boot.service...
Aug 13 00:51:34.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.825688 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:51:34.826368 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 00:51:34.827197 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:51:34.827378 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 00:51:34.831740 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:51:34.831895 systemd[1]: Finished initrd-cleanup.service.
Aug 13 00:51:34.847908 ignition[852]: INFO : Ignition 2.14.0
Aug 13 00:51:34.849136 ignition[852]: INFO : Stage: umount
Aug 13 00:51:34.850263 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 00:51:34.851034 ignition[852]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Aug 13 00:51:34.854411 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:51:34.857204 ignition[852]: INFO : umount: umount passed
Aug 13 00:51:34.857216 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:51:34.858275 ignition[852]: INFO : Ignition finished successfully
Aug 13 00:51:34.860296 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:51:34.860505 systemd[1]: Stopped ignition-mount.service.
Aug 13 00:51:34.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.861505 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:51:34.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.861745 systemd[1]: Stopped ignition-disks.service.
Aug 13 00:51:34.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.862283 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:51:34.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.862353 systemd[1]: Stopped ignition-kargs.service.
Aug 13 00:51:34.862897 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:51:34.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.862950 systemd[1]: Stopped ignition-fetch.service.
Aug 13 00:51:34.863692 systemd[1]: Stopped target network.target.
Aug 13 00:51:34.864398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:51:34.864453 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 00:51:34.865148 systemd[1]: Stopped target paths.target.
Aug 13 00:51:34.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.866302 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:51:34.869259 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 00:51:34.869886 systemd[1]: Stopped target slices.target.
Aug 13 00:51:34.870802 systemd[1]: Stopped target sockets.target.
Aug 13 00:51:34.871383 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:51:34.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.871440 systemd[1]: Closed iscsid.socket.
Aug 13 00:51:34.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.872135 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:51:34.872176 systemd[1]: Closed iscsiuio.socket.
Aug 13 00:51:34.872903 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:51:34.872973 systemd[1]: Stopped ignition-setup.service.
Aug 13 00:51:34.874231 systemd[1]: Stopping systemd-networkd.service...
Aug 13 00:51:34.875121 systemd[1]: Stopping systemd-resolved.service...
Aug 13 00:51:34.876020 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:51:34.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.876147 systemd[1]: Stopped sysroot-boot.service.
Aug 13 00:51:34.877522 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:51:34.877692 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 00:51:34.880288 systemd-networkd[689]: eth1: DHCPv6 lease lost
Aug 13 00:51:34.880742 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:51:34.880900 systemd[1]: Stopped systemd-resolved.service.
Aug 13 00:51:34.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.882739 systemd-networkd[689]: eth0: DHCPv6 lease lost
Aug 13 00:51:34.886000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 00:51:34.884767 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:51:34.884952 systemd[1]: Stopped systemd-networkd.service.
Aug 13 00:51:34.886610 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:51:34.888000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 00:51:34.886672 systemd[1]: Closed systemd-networkd.socket.
Aug 13 00:51:34.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.889016 systemd[1]: Stopping network-cleanup.service...
Aug 13 00:51:34.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.889568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:51:34.889674 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 00:51:34.890269 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:51:34.890341 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:51:34.890937 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:51:34.890994 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 00:51:34.891901 systemd[1]: Stopping systemd-udevd.service...
Aug 13 00:51:34.900482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:51:34.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.905130 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:51:34.905378 systemd[1]: Stopped systemd-udevd.service.
Aug 13 00:51:34.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.906456 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:51:34.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.906509 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 00:51:34.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.906990 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:51:34.907034 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 00:51:34.907724 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:51:34.907787 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 00:51:34.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.908564 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:51:34.908616 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 00:51:34.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.909668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:51:34.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.909730 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 00:51:34.911871 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 00:51:34.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.912407 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:51:34.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:34.912520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Aug 13 00:51:34.913521 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:51:34.913685 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 00:51:34.914678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:51:34.914731 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 00:51:34.917628 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:51:34.918532 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:51:34.918661 systemd[1]: Stopped network-cleanup.service.
Aug 13 00:51:34.929162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:51:34.929326 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 00:51:34.930011 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 00:51:34.932680 systemd[1]: Starting initrd-switch-root.service...
Aug 13 00:51:34.947598 systemd[1]: Switching root.
Aug 13 00:51:34.950000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 00:51:34.950000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 00:51:34.950000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 00:51:34.954000 audit: BPF prog-id=8 op=UNLOAD
Aug 13 00:51:34.954000 audit: BPF prog-id=7 op=UNLOAD
Aug 13 00:51:34.975814 iscsid[699]: iscsid shutting down.
Aug 13 00:51:34.976400 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Aug 13 00:51:34.976493 systemd-journald[183]: Journal stopped
Aug 13 00:51:38.950320 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 00:51:38.950422 kernel: SELinux: Class anon_inode not defined in policy.
Aug 13 00:51:38.950438 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 00:51:38.950465 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:51:38.950483 kernel: SELinux: policy capability open_perms=1
Aug 13 00:51:38.950504 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:51:38.950525 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:51:38.950538 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:51:38.950549 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:51:38.950567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:51:38.950580 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:51:38.950599 systemd[1]: Successfully loaded SELinux policy in 50.590ms.
Aug 13 00:51:38.950627 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.264ms.
Aug 13 00:51:38.950644 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:51:38.950661 systemd[1]: Detected virtualization kvm.
Aug 13 00:51:38.950674 systemd[1]: Detected architecture x86-64.
Aug 13 00:51:38.950687 systemd[1]: Detected first boot.
Aug 13 00:51:38.950702 systemd[1]: Hostname set to .
Aug 13 00:51:38.950715 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:51:38.950730 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 00:51:38.950743 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:51:38.950762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:51:38.950776 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:51:38.950791 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:51:38.950810 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:51:38.950823 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Aug 13 00:51:38.950837 systemd[1]: Created slice system-addon\x2dconfig.slice.
Aug 13 00:51:38.950850 systemd[1]: Created slice system-addon\x2drun.slice.
Aug 13 00:51:38.950863 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Aug 13 00:51:38.950881 systemd[1]: Created slice system-getty.slice.
Aug 13 00:51:38.950894 systemd[1]: Created slice system-modprobe.slice.
Aug 13 00:51:38.950906 systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 13 00:51:38.950922 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Aug 13 00:51:38.950937 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 13 00:51:38.950950 systemd[1]: Created slice user.slice.
Aug 13 00:51:38.950966 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:51:38.950984 systemd[1]: Started systemd-ask-password-wall.path.
Aug 13 00:51:38.950999 systemd[1]: Set up automount boot.automount.
Aug 13 00:51:38.951016 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Aug 13 00:51:38.951029 systemd[1]: Reached target integritysetup.target.
Aug 13 00:51:38.951044 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:51:38.951068 systemd[1]: Reached target remote-fs.target.
Aug 13 00:51:38.951082 systemd[1]: Reached target slices.target.
Aug 13 00:51:38.951095 systemd[1]: Reached target swap.target.
Aug 13 00:51:38.951123 systemd[1]: Reached target torcx.target.
Aug 13 00:51:38.951143 systemd[1]: Reached target veritysetup.target.
Aug 13 00:51:38.951159 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 00:51:38.951179 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 00:51:38.951200 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:51:38.951220 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:51:38.951241 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:51:38.951254 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:51:38.951268 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:51:38.951288 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:51:38.951305 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 00:51:38.951318 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 00:51:38.951331 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 00:51:38.951345 systemd[1]: Mounting media.mount...
Aug 13 00:51:38.951359 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:51:38.951379 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 00:51:38.951400 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 00:51:38.951419 systemd[1]: Mounting tmp.mount...
Aug 13 00:51:38.951434 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 00:51:38.951451 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:51:38.951464 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:51:38.951477 systemd[1]: Starting modprobe@configfs.service...
Aug 13 00:51:38.951494 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:51:38.951507 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:51:38.951520 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:51:38.951533 systemd[1]: Starting modprobe@fuse.service...
Aug 13 00:51:38.951547 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:51:38.951566 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:51:38.951583 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 00:51:38.951597 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 00:51:38.951609 systemd[1]: Starting systemd-journald.service...
Aug 13 00:51:38.951623 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:51:38.951638 systemd[1]: Starting systemd-network-generator.service...
Aug 13 00:51:38.951659 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 00:51:38.951675 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:51:38.951689 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:51:38.951703 systemd[1]: Mounted dev-hugepages.mount.
Aug 13 00:51:38.951721 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 00:51:38.951735 systemd[1]: Mounted media.mount.
Aug 13 00:51:38.951747 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 00:51:38.951760 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 00:51:38.951774 systemd[1]: Mounted tmp.mount.
Aug 13 00:51:38.951787 kernel: loop: module loaded
Aug 13 00:51:38.951802 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:51:38.951815 kernel: fuse: init (API version 7.34)
Aug 13 00:51:38.951828 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:51:38.951844 systemd[1]: Finished modprobe@configfs.service.
Aug 13 00:51:38.951881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:51:38.951895 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:51:38.951908 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:51:38.951921 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:51:38.951934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:51:38.951947 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:51:38.951960 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:51:38.951973 systemd[1]: Finished modprobe@fuse.service.
Aug 13 00:51:38.951989 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:51:38.952002 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:51:38.952018 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:51:38.952037 systemd[1]: Finished systemd-network-generator.service.
Aug 13 00:51:38.952056 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 00:51:38.952079 systemd[1]: Reached target network-pre.target.
Aug 13 00:51:38.971863 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Aug 13 00:51:38.971934 systemd[1]: Mounting sys-kernel-config.mount...
Aug 13 00:51:38.971952 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:51:38.971965 systemd[1]: Starting systemd-hwdb-update.service...
Aug 13 00:51:38.971984 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:51:38.972004 systemd[1]: Starting systemd-random-seed.service...
Aug 13 00:51:38.972024 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:51:38.972044 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:51:38.972071 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Aug 13 00:51:38.972084 systemd[1]: Mounted sys-kernel-config.mount.
Aug 13 00:51:38.972137 systemd-journald[990]: Journal started
Aug 13 00:51:38.972239 systemd-journald[990]: Runtime Journal (/run/log/journal/0bf1af7c168e4502a9d7609cf281995a) is 4.9M, max 39.5M, 34.5M free.
Aug 13 00:51:38.724000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:51:38.724000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Aug 13 00:51:38.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.946000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 00:51:38.946000 audit[990]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd6b372bf0 a2=4000 a3=7ffd6b372c8c items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:38.946000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 00:51:38.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.975293 systemd[1]: Started systemd-journald.service.
Aug 13 00:51:38.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.990437 systemd-journald[990]: Time spent on flushing to /var/log/journal/0bf1af7c168e4502a9d7609cf281995a is 52.286ms for 1079 entries.
Aug 13 00:51:38.990437 systemd-journald[990]: System Journal (/var/log/journal/0bf1af7c168e4502a9d7609cf281995a) is 8.0M, max 195.6M, 187.6M free.
Aug 13 00:51:39.059601 systemd-journald[990]: Received client request to flush runtime journal.
Aug 13 00:51:39.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.976819 systemd[1]: Starting systemd-journal-flush.service...
Aug 13 00:51:39.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:38.987095 systemd[1]: Finished systemd-random-seed.service.
Aug 13 00:51:38.987732 systemd[1]: Reached target first-boot-complete.target.
Aug 13 00:51:39.011397 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:51:39.061220 systemd[1]: Finished systemd-journal-flush.service.
Aug 13 00:51:39.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.096739 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:51:39.099175 systemd[1]: Starting systemd-udev-settle.service...
Aug 13 00:51:39.101972 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 00:51:39.105546 systemd[1]: Starting systemd-sysusers.service...
Aug 13 00:51:39.126232 udevadm[1043]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:51:39.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.145426 systemd[1]: Finished systemd-sysusers.service.
Aug 13 00:51:39.148114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:51:39.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.187176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:51:39.881391 kernel: kauditd_printk_skb: 77 callbacks suppressed
Aug 13 00:51:39.881557 kernel: audit: type=1130 audit(1755046299.878:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.878314 systemd[1]: Finished systemd-hwdb-update.service.
Aug 13 00:51:39.881347 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:51:39.913912 systemd-udevd[1052]: Using default interface naming scheme 'v252'.
Aug 13 00:51:39.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.945583 systemd[1]: Started systemd-udevd.service.
Aug 13 00:51:39.949002 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:51:39.950141 kernel: audit: type=1130 audit(1755046299.945:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:39.960919 systemd[1]: Starting systemd-userdbd.service...
Aug 13 00:51:40.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.025523 systemd[1]: Started systemd-userdbd.service.
Aug 13 00:51:40.030158 kernel: audit: type=1130 audit(1755046300.025:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.055508 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:51:40.055791 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:51:40.057866 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:51:40.063347 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:51:40.066013 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:51:40.068537 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:51:40.068650 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:51:40.068789 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:51:40.069461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:51:40.069858 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:51:40.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.078443 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:51:40.078706 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:51:40.081063 kernel: audit: type=1130 audit(1755046300.073:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.081276 kernel: audit: type=1131 audit(1755046300.073:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.081332 kernel: audit: type=1130 audit(1755046300.080:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.080972 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:51:40.085883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:51:40.086123 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:51:40.086775 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:51:40.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.092347 kernel: audit: type=1131 audit(1755046300.080:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.100149 kernel: audit: type=1130 audit(1755046300.086:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.103885 systemd[1]: Found device dev-ttyS0.device.
Aug 13 00:51:40.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.109150 kernel: audit: type=1131 audit(1755046300.086:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.222823 systemd-networkd[1054]: lo: Link UP
Aug 13 00:51:40.223459 systemd-networkd[1054]: lo: Gained carrier
Aug 13 00:51:40.224542 systemd-networkd[1054]: Enumeration completed
Aug 13 00:51:40.224795 systemd[1]: Started systemd-networkd.service.
Aug 13 00:51:40.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.225589 systemd-networkd[1054]: eth1: Configuring with /run/systemd/network/10-a6:2b:07:6a:f1:f2.network.
Aug 13 00:51:40.229273 kernel: audit: type=1130 audit(1755046300.225:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:40.230482 systemd-networkd[1054]: eth0: Configuring with /run/systemd/network/10-9a:0a:7d:21:d1:72.network.
Aug 13 00:51:40.231589 systemd-networkd[1054]: eth1: Link UP
Aug 13 00:51:40.231735 systemd-networkd[1054]: eth1: Gained carrier
Aug 13 00:51:40.239887 systemd-networkd[1054]: eth0: Link UP
Aug 13 00:51:40.239910 systemd-networkd[1054]: eth0: Gained carrier
Aug 13 00:51:40.263239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 00:51:40.286127 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:51:40.291514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:51:40.310000 audit[1057]: AVC avc: denied { confidentiality } for pid=1057 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 00:51:40.310000 audit[1057]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d8ea505040 a1=338ac a2=7f053b666bc5 a3=5 items=110 ppid=1052 pid=1057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:40.310000 audit: CWD cwd="/"
Aug 13 00:51:40.310000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=1 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=2 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=3 name=(null) inode=14487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=4 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=5 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=6 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=7 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=8 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=9 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=10 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=11 name=(null) inode=14491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=12 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=13 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=14 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=15 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=16 name=(null) inode=14489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=17 name=(null) inode=14494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=18 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=19 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=20 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=21 name=(null) inode=14496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=22 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=23 name=(null) inode=14497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=24 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=25 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=26 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=27 name=(null) inode=14499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=28 name=(null) inode=14495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=29 name=(null) inode=14500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=30 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=31 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=32 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=33 name=(null) inode=14502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=34 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=35 name=(null) inode=14503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=36 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=37 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=38 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=39 name=(null) inode=14505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=40 name=(null) inode=14501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=41 name=(null) inode=14506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=42 name=(null) inode=14486 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=43 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=44 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=45 name=(null) inode=14508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=46 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=47 name=(null) inode=14509 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=48 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=49 name=(null) inode=14510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=50 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=51 name=(null) inode=14511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=52 name=(null) inode=14507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=53 name=(null) inode=14512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=55 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=56 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=57 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=58 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=59 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=60 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=61 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=62 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=63 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=64 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=65 name=(null) inode=14518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=66 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=67 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=68 name=(null) inode=14516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=69 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:51:40.310000 audit: PATH item=70 name=(null) inode=14516 dev=00:0b
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=71 name=(null) inode=14521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=72 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=73 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=74 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=75 name=(null) inode=14523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=76 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=77 name=(null) inode=14524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=78 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=79 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=80 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=81 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=82 name=(null) inode=14522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=83 name=(null) inode=14527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=84 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=85 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=86 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=87 name=(null) inode=14529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=88 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=89 name=(null) inode=14530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=90 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=91 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=92 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=93 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=94 name=(null) inode=14528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=95 name=(null) inode=14533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=96 name=(null) inode=14513 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=97 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=98 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=99 name=(null) inode=14535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=100 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=101 name=(null) inode=14536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=102 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=103 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=104 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=105 name=(null) inode=14538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=106 name=(null) inode=14534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:51:40.310000 audit: PATH item=107 name=(null) inode=14539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PATH item=109 name=(null) inode=14540 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:51:40.310000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:51:40.329130 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 00:51:40.336137 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:51:40.377131 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:51:40.534139 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:51:40.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.561234 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:51:40.564610 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:51:40.592948 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:40.623518 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:51:40.624378 systemd[1]: Reached target cryptsetup.target. Aug 13 00:51:40.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:40.627440 systemd[1]: Starting lvm2-activation.service... Aug 13 00:51:40.638617 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:51:40.675379 systemd[1]: Finished lvm2-activation.service. Aug 13 00:51:40.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.676041 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:51:40.678731 systemd[1]: Mounting media-configdrive.mount... Aug 13 00:51:40.679294 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:51:40.679385 systemd[1]: Reached target machines.target. Aug 13 00:51:40.682791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:51:40.702137 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 00:51:40.703899 systemd[1]: Mounted media-configdrive.mount. Aug 13 00:51:40.704397 systemd[1]: Reached target local-fs.target. Aug 13 00:51:40.706889 systemd[1]: Starting ldconfig.service... Aug 13 00:51:40.708231 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:40.708327 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:40.710635 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:51:40.715509 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:51:40.719667 systemd[1]: Starting systemd-sysext.service... 
Aug 13 00:51:40.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.725700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:51:40.732541 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Aug 13 00:51:40.734637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:51:40.758426 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:51:40.773677 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:51:40.774054 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:51:40.814375 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:51:40.818203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:51:40.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.820837 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:51:40.852242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:51:40.876141 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:51:40.893651 systemd-fsck[1117]: fsck.fat 4.2 (2021-01-31) Aug 13 00:51:40.893651 systemd-fsck[1117]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 00:51:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.898561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Aug 13 00:51:40.901827 systemd[1]: Mounting boot.mount... Aug 13 00:51:40.911293 (sd-sysext)[1121]: Using extensions 'kubernetes'. Aug 13 00:51:40.919158 systemd[1]: Mounted boot.mount. Aug 13 00:51:40.923357 (sd-sysext)[1121]: Merged extensions into '/usr'. Aug 13 00:51:40.968010 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:51:40.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:40.971948 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:40.976395 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:51:40.977802 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:40.981740 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:40.992340 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:40.995786 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:40.997017 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:40.997432 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:40.997808 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:41.011906 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:51:41.014167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:41.014472 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:51:41.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.018800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:41.019081 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:41.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.021409 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:41.022030 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:41.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.023983 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 00:51:41.024497 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.026743 systemd[1]: Finished systemd-sysext.service. Aug 13 00:51:41.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.032897 systemd[1]: Starting ensure-sysext.service... Aug 13 00:51:41.051804 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:51:41.063249 systemd[1]: Reloading. Aug 13 00:51:41.102198 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:51:41.106291 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:51:41.114578 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:51:41.252474 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-08-13T00:51:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:51:41.252532 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-08-13T00:51:41Z" level=info msg="torcx already run" Aug 13 00:51:41.262687 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:51:41.423416 systemd-networkd[1054]: eth1: Gained IPv6LL Aug 13 00:51:41.467573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 13 00:51:41.467609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:51:41.504835 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:51:41.602041 systemd[1]: Finished ldconfig.service. Aug 13 00:51:41.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.604765 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:51:41.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.610307 systemd[1]: Starting audit-rules.service... Aug 13 00:51:41.613318 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:51:41.617032 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:51:41.626748 systemd[1]: Starting systemd-resolved.service... Aug 13 00:51:41.633734 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:51:41.637534 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:51:41.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.645002 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:51:41.661819 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.665872 systemd[1]: Starting modprobe@dm_mod.service... 
Aug 13 00:51:41.670056 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:41.674518 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:41.675180 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.675431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:41.675670 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:41.677132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:41.677451 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:41.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.681000 audit[1221]: SYSTEM_BOOT pid=1221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.691160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.694598 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:51:41.696345 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 00:51:41.696811 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:41.697207 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:41.698873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:51:41.699245 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:51:41.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.714458 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:51:41.718411 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:51:41.718695 systemd[1]: Finished modprobe@loop.service. Aug 13 00:51:41.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:51:41.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:51:41.732508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:51:41.733377 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:51:41.735016 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:41.735550 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.739233 systemd[1]: Starting modprobe@drm.service... Aug 13 00:51:41.746952 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:51:41.753958 systemd[1]: Starting modprobe@loop.service... Aug 13 00:51:41.754739 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:51:41.755038 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:51:41.758222 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:51:41.759030 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:51:41.759403 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:51:41.771938 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Aug 13 00:51:41.772308 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:51:41.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.773992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:51:41.774381 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:51:41.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.776060 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:51:41.780066 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:51:41.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.781360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:51:41.781711 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:51:41.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.782615 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:51:41.810279 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 00:51:41.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.814035 systemd[1]: Starting systemd-update-done.service...
Aug 13 00:51:41.837630 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:51:41.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.839868 systemd[1]: Finished systemd-update-done.service.
Aug 13 00:51:41.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:41.870000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 00:51:41.870000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff89a74970 a2=420 a3=0 items=0 ppid=1215 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:41.870000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 00:51:41.870941 augenrules[1257]: No rules
Aug 13 00:51:41.871806 systemd[1]: Finished audit-rules.service.
Aug 13 00:51:41.899934 systemd-resolved[1218]: Positive Trust Anchors:
Aug 13 00:51:41.900476 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:51:41.900580 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:51:41.907250 systemd-resolved[1218]: Using system hostname 'ci-3510.3.8-8-adc8b0fbd5'.
Aug 13 00:51:41.910124 systemd[1]: Started systemd-resolved.service.
Aug 13 00:51:41.910722 systemd[1]: Reached target network.target.
Aug 13 00:51:41.911163 systemd[1]: Reached target network-online.target.
Aug 13 00:51:41.911582 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:51:41.913579 systemd[1]: Started systemd-timesyncd.service.
Aug 13 00:51:41.914191 systemd[1]: Reached target sysinit.target.
Aug 13 00:51:41.914746 systemd[1]: Started motdgen.path.
Aug 13 00:51:41.915217 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 00:51:41.915718 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 00:51:41.916210 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:51:41.916257 systemd[1]: Reached target paths.target.
Aug 13 00:51:41.916605 systemd[1]: Reached target time-set.target.
Aug 13 00:51:41.917276 systemd[1]: Started logrotate.timer.
Aug 13 00:51:41.917988 systemd[1]: Started mdadm.timer.
Aug 13 00:51:41.918421 systemd[1]: Reached target timers.target.
Aug 13 00:51:41.919385 systemd[1]: Listening on dbus.socket.
Aug 13 00:51:41.922297 systemd[1]: Starting docker.socket...
Aug 13 00:51:41.925531 systemd[1]: Listening on sshd.socket.
Aug 13 00:51:41.926267 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:51:41.926994 systemd[1]: Listening on docker.socket.
Aug 13 00:51:41.927592 systemd[1]: Reached target sockets.target.
Aug 13 00:51:41.928040 systemd[1]: Reached target basic.target.
Aug 13 00:51:41.928755 systemd[1]: System is tainted: cgroupsv1
Aug 13 00:51:41.928854 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:51:41.928880 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:51:41.930461 systemd[1]: Starting containerd.service...
Aug 13 00:51:41.933169 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Aug 13 00:51:42.601766 systemd-timesyncd[1219]: Contacted time server 198.137.202.32:123 (0.flatcar.pool.ntp.org).
Aug 13 00:51:42.601851 systemd-timesyncd[1219]: Initial clock synchronization to Wed 2025-08-13 00:51:42.601494 UTC.
Aug 13 00:51:42.602632 systemd-resolved[1218]: Clock change detected. Flushing caches.
Aug 13 00:51:42.604094 systemd[1]: Starting dbus.service...
Aug 13 00:51:42.606352 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 00:51:42.614238 systemd[1]: Starting extend-filesystems.service...
Aug 13 00:51:42.615328 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 00:51:42.617997 jq[1274]: false
Aug 13 00:51:42.619538 systemd[1]: Starting kubelet.service...
Aug 13 00:51:42.623581 systemd[1]: Starting motdgen.service...
Aug 13 00:51:42.636421 systemd[1]: Starting prepare-helm.service...
Aug 13 00:51:42.643384 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 00:51:42.649649 systemd[1]: Starting sshd-keygen.service...
Aug 13 00:51:42.650013 extend-filesystems[1275]: Found loop1
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda1
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda2
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda3
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found usr
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda4
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda6
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda7
Aug 13 00:51:42.651252 extend-filesystems[1275]: Found vda9
Aug 13 00:51:42.651252 extend-filesystems[1275]: Checking size of /dev/vda9
Aug 13 00:51:42.653111 systemd[1]: Starting systemd-logind.service...
Aug 13 00:51:42.653586 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:51:42.653683 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:51:42.659156 systemd[1]: Starting update-engine.service...
Aug 13 00:51:42.662229 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 00:51:42.666663 jq[1295]: true
Aug 13 00:51:42.668419 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:51:42.668847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 00:51:42.712219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:51:42.712561 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 00:51:42.723597 dbus-daemon[1273]: [system] SELinux support is enabled
Aug 13 00:51:42.723892 systemd[1]: Started dbus.service.
Aug 13 00:51:42.726832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:51:42.726882 systemd[1]: Reached target system-config.target.
Aug 13 00:51:42.727405 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:51:42.727435 systemd[1]: Reached target user-config.target.
Aug 13 00:51:42.738417 tar[1302]: linux-amd64/helm
Aug 13 00:51:42.744654 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:51:42.744923 systemd[1]: Finished motdgen.service.
Aug 13 00:51:42.755788 jq[1299]: true
Aug 13 00:51:42.775038 extend-filesystems[1275]: Resized partition /dev/vda9
Aug 13 00:51:42.791764 extend-filesystems[1315]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 00:51:42.795179 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 00:51:42.793123 systemd-networkd[1054]: eth0: Gained IPv6LL
Aug 13 00:51:42.853881 update_engine[1293]: I0813 00:51:42.853268 1293 main.cc:92] Flatcar Update Engine starting
Aug 13 00:51:42.859009 systemd[1]: Started update-engine.service.
Aug 13 00:51:42.861993 systemd[1]: Started locksmithd.service.
Aug 13 00:51:42.864392 update_engine[1293]: I0813 00:51:42.864341 1293 update_check_scheduler.cc:74] Next update check in 7m54s
Aug 13 00:51:42.927481 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 13 00:51:42.945780 extend-filesystems[1315]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:51:42.945780 extend-filesystems[1315]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 13 00:51:42.945780 extend-filesystems[1315]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 13 00:51:42.948428 extend-filesystems[1275]: Resized filesystem in /dev/vda9
Aug 13 00:51:42.948428 extend-filesystems[1275]: Found vdb
Aug 13 00:51:42.947263 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:51:42.947582 systemd[1]: Finished extend-filesystems.service.
Aug 13 00:51:42.964519 bash[1335]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:51:42.966076 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 00:51:43.046531 env[1301]: time="2025-08-13T00:51:43.046409056Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 00:51:43.048030 systemd-logind[1292]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 00:51:43.054616 systemd-logind[1292]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:51:43.058926 systemd-logind[1292]: New seat seat0.
Aug 13 00:51:43.071470 systemd[1]: Started systemd-logind.service.
Aug 13 00:51:43.101262 coreos-metadata[1269]: Aug 13 00:51:43.100 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:51:43.119538 coreos-metadata[1269]: Aug 13 00:51:43.119 INFO Fetch successful
Aug 13 00:51:43.129324 unknown[1269]: wrote ssh authorized keys file for user: core
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.140711515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.140937063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.142871725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.142924312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.143321680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.143346332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.143363326Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.143374391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.143675 env[1301]: time="2025-08-13T00:51:43.143518063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.152220 env[1301]: time="2025-08-13T00:51:43.151986196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:51:43.152385 env[1301]: time="2025-08-13T00:51:43.152269320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:51:43.152385 env[1301]: time="2025-08-13T00:51:43.152294451Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:51:43.152480 env[1301]: time="2025-08-13T00:51:43.152407226Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 00:51:43.152480 env[1301]: time="2025-08-13T00:51:43.152421419Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155610771Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155664200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155679062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155741316Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155764728Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155836782Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155853623Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155868205Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155882923Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155897684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155911943Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.155925497Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.156055199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:51:43.156757 env[1301]: time="2025-08-13T00:51:43.156139308Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156532921Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156563607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156580702Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156642892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156656979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156671244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156683908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156698722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156713878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156727550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156746676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156768041Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156919463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156936582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157237 env[1301]: time="2025-08-13T00:51:43.156951458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157648 env[1301]: time="2025-08-13T00:51:43.156963762Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:51:43.157648 env[1301]: time="2025-08-13T00:51:43.156981244Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 00:51:43.157648 env[1301]: time="2025-08-13T00:51:43.156995351Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:51:43.157648 env[1301]: time="2025-08-13T00:51:43.157018576Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 00:51:43.157648 env[1301]: time="2025-08-13T00:51:43.157062298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:51:43.157888 env[1301]: time="2025-08-13T00:51:43.157316920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:51:43.157888 env[1301]: time="2025-08-13T00:51:43.157382663Z" level=info msg="Connect containerd service"
Aug 13 00:51:43.157888 env[1301]: time="2025-08-13T00:51:43.157428104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:51:43.160924 update-ssh-keys[1342]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:51:43.158646 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Aug 13 00:51:43.173825 env[1301]: time="2025-08-13T00:51:43.172498421Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:51:43.174010 env[1301]: time="2025-08-13T00:51:43.173892246Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:51:43.174010 env[1301]: time="2025-08-13T00:51:43.173969420Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:51:43.174174 systemd[1]: Started containerd.service.
Aug 13 00:51:43.175191 env[1301]: time="2025-08-13T00:51:43.175009446Z" level=info msg="containerd successfully booted in 0.162143s"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.184969170Z" level=info msg="Start subscribing containerd event"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.185103645Z" level=info msg="Start recovering state"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.185234880Z" level=info msg="Start event monitor"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.185256894Z" level=info msg="Start snapshots syncer"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.185269193Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:51:43.185280 env[1301]: time="2025-08-13T00:51:43.185277458Z" level=info msg="Start streaming server"
Aug 13 00:51:43.673826 sshd_keygen[1310]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:51:43.765907 systemd[1]: Finished sshd-keygen.service.
Aug 13 00:51:43.770740 systemd[1]: Starting issuegen.service...
Aug 13 00:51:43.791985 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:51:43.792298 systemd[1]: Finished issuegen.service.
Aug 13 00:51:43.795361 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 00:51:43.817154 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 00:51:43.819816 systemd[1]: Started getty@tty1.service.
Aug 13 00:51:43.823976 systemd[1]: Started serial-getty@ttyS0.service.
Aug 13 00:51:43.825981 systemd[1]: Reached target getty.target.
Aug 13 00:51:43.848204 tar[1302]: linux-amd64/LICENSE
Aug 13 00:51:43.848807 tar[1302]: linux-amd64/README.md
Aug 13 00:51:43.856780 systemd[1]: Finished prepare-helm.service.
Aug 13 00:51:43.920213 locksmithd[1328]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:51:44.552031 systemd[1]: Started kubelet.service.
Aug 13 00:51:44.558658 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:51:44.560007 systemd[1]: Reached target multi-user.target.
Aug 13 00:51:44.564311 systemd[1]: Started sshd@0-137.184.32.218:22-139.178.68.195:39258.service.
Aug 13 00:51:44.569759 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 00:51:44.591458 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 00:51:44.591805 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 00:51:44.592345 systemd[1]: Startup finished in 6.318s (kernel) + 8.808s (userspace) = 15.126s.
Aug 13 00:51:44.675427 sshd[1379]: Accepted publickey for core from 139.178.68.195 port 39258 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:44.678597 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:44.697701 systemd[1]: Created slice user-500.slice.
Aug 13 00:51:44.701127 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 00:51:44.716490 systemd-logind[1292]: New session 1 of user core.
Aug 13 00:51:44.730485 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 00:51:44.732659 systemd[1]: Starting user@500.service...
Aug 13 00:51:44.740401 (systemd)[1386]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:44.879933 systemd[1386]: Queued start job for default target default.target.
Aug 13 00:51:44.881077 systemd[1386]: Reached target paths.target.
Aug 13 00:51:44.881327 systemd[1386]: Reached target sockets.target.
Aug 13 00:51:44.881502 systemd[1386]: Reached target timers.target.
Aug 13 00:51:44.881649 systemd[1386]: Reached target basic.target.
Aug 13 00:51:44.881874 systemd[1386]: Reached target default.target.
Aug 13 00:51:44.882072 systemd[1386]: Startup finished in 125ms.
Aug 13 00:51:44.882080 systemd[1]: Started user@500.service.
Aug 13 00:51:44.883541 systemd[1]: Started session-1.scope.
Aug 13 00:51:44.951291 systemd[1]: Started sshd@1-137.184.32.218:22-139.178.68.195:39266.service.
Aug 13 00:51:45.041246 sshd[1400]: Accepted publickey for core from 139.178.68.195 port 39266 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.042294 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.049042 systemd[1]: Started session-2.scope.
Aug 13 00:51:45.050155 systemd-logind[1292]: New session 2 of user core.
Aug 13 00:51:45.120198 sshd[1400]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.125062 systemd[1]: Started sshd@2-137.184.32.218:22-139.178.68.195:39268.service.
Aug 13 00:51:45.134314 systemd[1]: sshd@1-137.184.32.218:22-139.178.68.195:39266.service: Deactivated successfully.
Aug 13 00:51:45.135364 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:51:45.136245 systemd-logind[1292]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:51:45.138277 systemd-logind[1292]: Removed session 2.
Aug 13 00:51:45.198130 sshd[1405]: Accepted publickey for core from 139.178.68.195 port 39268 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.199515 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.207882 systemd-logind[1292]: New session 3 of user core.
Aug 13 00:51:45.208643 systemd[1]: Started session-3.scope.
Aug 13 00:51:45.281625 sshd[1405]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.288894 systemd[1]: Started sshd@3-137.184.32.218:22-139.178.68.195:39276.service.
Aug 13 00:51:45.293068 systemd[1]: sshd@2-137.184.32.218:22-139.178.68.195:39268.service: Deactivated successfully.
Aug 13 00:51:45.294432 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:51:45.294603 systemd-logind[1292]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:51:45.300577 systemd-logind[1292]: Removed session 3.
Aug 13 00:51:45.351168 sshd[1412]: Accepted publickey for core from 139.178.68.195 port 39276 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.354587 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.362270 systemd-logind[1292]: New session 4 of user core.
Aug 13 00:51:45.363947 systemd[1]: Started session-4.scope.
Aug 13 00:51:45.441332 sshd[1412]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.446282 systemd[1]: Started sshd@4-137.184.32.218:22-139.178.68.195:39292.service.
Aug 13 00:51:45.452441 systemd[1]: sshd@3-137.184.32.218:22-139.178.68.195:39276.service: Deactivated successfully.
Aug 13 00:51:45.453585 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:51:45.455985 systemd-logind[1292]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:51:45.461378 systemd-logind[1292]: Removed session 4.
Aug 13 00:51:45.507674 sshd[1419]: Accepted publickey for core from 139.178.68.195 port 39292 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.510881 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.519486 systemd[1]: Started session-5.scope.
Aug 13 00:51:45.520574 systemd-logind[1292]: New session 5 of user core.
Aug 13 00:51:45.522692 kubelet[1377]: E0813 00:51:45.522610 1377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:51:45.526003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:51:45.526216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:51:45.603134 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:51:45.603555 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:51:45.614752 dbus-daemon[1273]: \xd0}\x8b]/V: received setenforce notice (enforcing=1814134736)
Aug 13 00:51:45.616965 sudo[1426]: pam_unix(sudo:session): session closed for user root
Aug 13 00:51:45.622951 sshd[1419]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.632269 systemd[1]: Started sshd@5-137.184.32.218:22-139.178.68.195:39296.service.
Aug 13 00:51:45.640521 systemd-logind[1292]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:51:45.643662 systemd[1]: sshd@4-137.184.32.218:22-139.178.68.195:39292.service: Deactivated successfully.
Aug 13 00:51:45.644639 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:51:45.647192 systemd-logind[1292]: Removed session 5.
Aug 13 00:51:45.700132 sshd[1428]: Accepted publickey for core from 139.178.68.195 port 39296 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.702477 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.708650 systemd[1]: Started session-6.scope.
Aug 13 00:51:45.709137 systemd-logind[1292]: New session 6 of user core.
Aug 13 00:51:45.772365 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:51:45.773403 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:51:45.778278 sudo[1435]: pam_unix(sudo:session): session closed for user root
Aug 13 00:51:45.786118 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 00:51:45.786420 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:51:45.799227 systemd[1]: Stopping audit-rules.service...
Aug 13 00:51:45.799000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 13 00:51:45.801532 auditctl[1438]: No rules
Aug 13 00:51:45.801965 kernel: kauditd_printk_skb: 154 callbacks suppressed
Aug 13 00:51:45.802047 kernel: audit: type=1305 audit(1755046305.799:167): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Aug 13 00:51:45.799000 audit[1438]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce8ee6090 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:45.802405 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:51:45.802731 systemd[1]: Stopped audit-rules.service.
Aug 13 00:51:45.805041 systemd[1]: Starting audit-rules.service...
Aug 13 00:51:45.810701 kernel: audit: type=1300 audit(1755046305.799:167): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce8ee6090 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:45.810867 kernel: audit: type=1327 audit(1755046305.799:167): proctitle=2F7362696E2F617564697463746C002D44
Aug 13 00:51:45.810905 kernel: audit: type=1131 audit(1755046305.801:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.799000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Aug 13 00:51:45.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.833714 augenrules[1456]: No rules
Aug 13 00:51:45.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.834815 systemd[1]: Finished audit-rules.service.
Aug 13 00:51:45.838463 kernel: audit: type=1130 audit(1755046305.833:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.837929 sudo[1434]: pam_unix(sudo:session): session closed for user root
Aug 13 00:51:45.836000 audit[1434]: USER_END pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.842556 kernel: audit: type=1106 audit(1755046305.836:170): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.847719 kernel: audit: type=1104 audit(1755046305.836:171): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.836000 audit[1434]: CRED_DISP pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.842056 sshd[1428]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.846062 systemd[1]: Started sshd@6-137.184.32.218:22-139.178.68.195:39302.service.
Aug 13 00:51:45.846846 systemd[1]: sshd@5-137.184.32.218:22-139.178.68.195:39296.service: Deactivated successfully.
Aug 13 00:51:45.855978 kernel: audit: type=1106 audit(1755046305.841:172): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.841000 audit[1428]: USER_END pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.853821 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:51:45.853990 systemd-logind[1292]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:51:45.860095 systemd-logind[1292]: Removed session 6.
Aug 13 00:51:45.841000 audit[1428]: CRED_DISP pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.32.218:22-139.178.68.195:39302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.866517 kernel: audit: type=1104 audit(1755046305.841:173): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.866655 kernel: audit: type=1130 audit(1755046305.844:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.32.218:22-139.178.68.195:39302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-137.184.32.218:22-139.178.68.195:39296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.906000 audit[1462]: USER_ACCT pid=1462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.908033 sshd[1462]: Accepted publickey for core from 139.178.68.195 port 39302 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:51:45.907000 audit[1462]: CRED_ACQ pid=1462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.908000 audit[1462]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0db08da0 a2=3 a3=0 items=0 ppid=1 pid=1462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:45.908000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:51:45.910309 sshd[1462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:51:45.915032 systemd-logind[1292]: New session 7 of user core.
Aug 13 00:51:45.916603 systemd[1]: Started session-7.scope.
Aug 13 00:51:45.924000 audit[1462]: USER_START pid=1462 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.926000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:51:45.982000 audit[1467]: USER_ACCT pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.985166 sudo[1467]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:51:45.983000 audit[1467]: CRED_REFR pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:45.985550 sudo[1467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:51:45.987000 audit[1467]: USER_START pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:46.031664 systemd[1]: Starting docker.service...
Aug 13 00:51:46.085789 env[1477]: time="2025-08-13T00:51:46.085670444Z" level=info msg="Starting up"
Aug 13 00:51:46.090432 env[1477]: time="2025-08-13T00:51:46.090385481Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:51:46.090432 env[1477]: time="2025-08-13T00:51:46.090420189Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:51:46.090432 env[1477]: time="2025-08-13T00:51:46.090456121Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:51:46.090432 env[1477]: time="2025-08-13T00:51:46.090468675Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:51:46.093020 env[1477]: time="2025-08-13T00:51:46.092964608Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:51:46.093020 env[1477]: time="2025-08-13T00:51:46.092998449Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:51:46.093020 env[1477]: time="2025-08-13T00:51:46.093022984Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:51:46.093255 env[1477]: time="2025-08-13T00:51:46.093035276Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:51:46.104992 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3739348164-merged.mount: Deactivated successfully.
Aug 13 00:51:46.128800 env[1477]: time="2025-08-13T00:51:46.128726202Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 13 00:51:46.128800 env[1477]: time="2025-08-13T00:51:46.128769919Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 13 00:51:46.129188 env[1477]: time="2025-08-13T00:51:46.129133766Z" level=info msg="Loading containers: start."
Aug 13 00:51:46.223000 audit[1508]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.223000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd0e4c2860 a2=0 a3=7ffd0e4c284c items=0 ppid=1477 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.223000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Aug 13 00:51:46.225000 audit[1510]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.225000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe6c82dc50 a2=0 a3=7ffe6c82dc3c items=0 ppid=1477 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.225000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Aug 13 00:51:46.228000 audit[1512]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.228000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea89cb980 a2=0 a3=7ffea89cb96c items=0 ppid=1477 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.228000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Aug 13 00:51:46.231000 audit[1514]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.231000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb5e82ec0 a2=0 a3=7fffb5e82eac items=0 ppid=1477 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.231000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Aug 13 00:51:46.234000 audit[1516]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.234000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef89980d0 a2=0 a3=7ffef89980bc items=0 ppid=1477 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Aug 13 00:51:46.253000 audit[1521]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.253000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc23077e90 a2=0 a3=7ffc23077e7c items=0 ppid=1477 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.253000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Aug 13 00:51:46.260000 audit[1523]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.260000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc808e83b0 a2=0 a3=7ffc808e839c items=0 ppid=1477 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Aug 13 00:51:46.262000 audit[1525]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.262000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd51b1cfe0 a2=0 a3=7ffd51b1cfcc items=0 ppid=1477 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Aug 13 00:51:46.265000 audit[1527]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.265000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd0d68ad40 a2=0 a3=7ffd0d68ad2c items=0 ppid=1477 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:51:46.274000 audit[1531]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.274000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcfdeb9940 a2=0 a3=7ffcfdeb992c items=0 ppid=1477 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:51:46.280000 audit[1532]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.280000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc2ad561f0 a2=0 a3=7ffc2ad561dc items=0 ppid=1477 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:51:46.296467 kernel: Initializing XFRM netlink socket
Aug 13 00:51:46.340306 env[1477]: time="2025-08-13T00:51:46.340259071Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:51:46.384000 audit[1540]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.384000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd0e49f930 a2=0 a3=7ffd0e49f91c items=0 ppid=1477 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.384000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Aug 13 00:51:46.399000 audit[1543]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.399000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe3408ed70 a2=0 a3=7ffe3408ed5c items=0 ppid=1477 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.399000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Aug 13 00:51:46.404000 audit[1546]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.404000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff3f9e5450 a2=0 a3=7fff3f9e543c items=0 ppid=1477 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Aug 13 00:51:46.408000 audit[1548]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.408000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdc0eb6070 a2=0 a3=7ffdc0eb605c items=0 ppid=1477 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Aug 13 00:51:46.413000 audit[1550]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.413000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffddc0b6e10 a2=0 a3=7ffddc0b6dfc items=0 ppid=1477 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.413000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Aug 13 00:51:46.417000 audit[1552]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.417000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff430e7480 a2=0 a3=7fff430e746c items=0 ppid=1477 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.417000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Aug 13 00:51:46.422000 audit[1554]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.422000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffde5bea170 a2=0 a3=7ffde5bea15c items=0 ppid=1477 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Aug 13 00:51:46.437000 audit[1557]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.437000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffef7c26d60 a2=0 a3=7ffef7c26d4c items=0 ppid=1477 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Aug 13 00:51:46.443000 audit[1559]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.443000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffccd786790 a2=0 a3=7ffccd78677c items=0 ppid=1477 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Aug 13 00:51:46.447000 audit[1561]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.447000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdbfc01c30 a2=0 a3=7ffdbfc01c1c items=0 ppid=1477 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.447000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Aug 13 00:51:46.453000 audit[1563]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.453000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd6c768ad0 a2=0 a3=7ffd6c768abc items=0 ppid=1477 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.453000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Aug 13 00:51:46.454776 systemd-networkd[1054]: docker0: Link UP
Aug 13 00:51:46.467000 audit[1567]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.467000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc691df260 a2=0 a3=7ffc691df24c items=0 ppid=1477 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.467000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:51:46.474000 audit[1568]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:51:46.474000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc0ef065b0 a2=0 a3=7ffc0ef0659c items=0 ppid=1477 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:51:46.474000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Aug 13 00:51:46.477498 env[1477]: time="2025-08-13T00:51:46.477360054Z" level=info msg="Loading containers: done."
Aug 13 00:51:46.502499 env[1477]: time="2025-08-13T00:51:46.502294432Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:51:46.504680 env[1477]: time="2025-08-13T00:51:46.504617483Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:51:46.504870 env[1477]: time="2025-08-13T00:51:46.504823799Z" level=info msg="Daemon has completed initialization"
Aug 13 00:51:46.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:46.527092 systemd[1]: Started docker.service.
Aug 13 00:51:46.539540 env[1477]: time="2025-08-13T00:51:46.539433102Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:51:46.572707 systemd[1]: Starting coreos-metadata.service...
Aug 13 00:51:46.643895 coreos-metadata[1593]: Aug 13 00:51:46.643 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:51:46.656946 coreos-metadata[1593]: Aug 13 00:51:46.656 INFO Fetch successful
Aug 13 00:51:46.679522 systemd[1]: Finished coreos-metadata.service.
Aug 13 00:51:46.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:47.702050 env[1301]: time="2025-08-13T00:51:47.701971397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:51:48.222858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1427160016.mount: Deactivated successfully.
Aug 13 00:51:49.730824 env[1301]: time="2025-08-13T00:51:49.730742702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:49.732875 env[1301]: time="2025-08-13T00:51:49.732772835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:49.735903 env[1301]: time="2025-08-13T00:51:49.735834761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:49.740117 env[1301]: time="2025-08-13T00:51:49.740041067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:49.741753 env[1301]: time="2025-08-13T00:51:49.741639767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:51:49.742961 env[1301]: time="2025-08-13T00:51:49.742895680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:51:51.717355 env[1301]: time="2025-08-13T00:51:51.717270167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:51.719544 env[1301]: time="2025-08-13T00:51:51.719491088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:51.722281 env[1301]: time="2025-08-13T00:51:51.722218389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:51.724482 env[1301]: time="2025-08-13T00:51:51.724413251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:51.726130 env[1301]: time="2025-08-13T00:51:51.726051714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:51:51.727745 env[1301]: time="2025-08-13T00:51:51.727700947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:51:53.544456 env[1301]: time="2025-08-13T00:51:53.544252626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:53.546872 env[1301]: time="2025-08-13T00:51:53.546798095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:53.549893 env[1301]: time="2025-08-13T00:51:53.549830570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:53.552747 env[1301]: time="2025-08-13T00:51:53.552683763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:53.553800 env[1301]: time="2025-08-13T00:51:53.553722806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:51:53.556022 env[1301]: time="2025-08-13T00:51:53.555962031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:51:54.757822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839388288.mount: Deactivated successfully.
Aug 13 00:51:55.549814 env[1301]: time="2025-08-13T00:51:55.549747405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:55.551779 env[1301]: time="2025-08-13T00:51:55.551730798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:55.553477 env[1301]: time="2025-08-13T00:51:55.553410813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:55.554960 env[1301]: time="2025-08-13T00:51:55.554917188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:55.555965 env[1301]: time="2025-08-13T00:51:55.555923948Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:51:55.556889 env[1301]: time="2025-08-13T00:51:55.556857676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:51:55.777532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:51:55.777934 systemd[1]: Stopped kubelet.service.
Aug 13 00:51:55.786561 kernel: kauditd_printk_skb: 85 callbacks suppressed
Aug 13 00:51:55.786773 kernel: audit: type=1130 audit(1755046315.776:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:55.786835 kernel: audit: type=1131 audit(1755046315.776:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:55.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:55.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:55.782236 systemd[1]: Starting kubelet.service...
Aug 13 00:51:56.044631 kernel: audit: type=1130 audit(1755046316.039:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:56.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:51:56.041020 systemd[1]: Started kubelet.service.
Aug 13 00:51:56.087269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880584159.mount: Deactivated successfully.
Aug 13 00:51:56.168083 kubelet[1624]: E0813 00:51:56.168003 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:51:56.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:51:56.171963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:51:56.172149 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:51:56.176596 kernel: audit: type=1131 audit(1755046316.171:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:51:57.550100 env[1301]: time="2025-08-13T00:51:57.550010054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:57.552190 env[1301]: time="2025-08-13T00:51:57.552141801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:57.554936 env[1301]: time="2025-08-13T00:51:57.554883991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:57.557245 env[1301]: time="2025-08-13T00:51:57.557194661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:57.559602 env[1301]: time="2025-08-13T00:51:57.559541595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:51:57.563692 env[1301]: time="2025-08-13T00:51:57.563643408Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:51:58.043353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274077491.mount: Deactivated successfully.
Aug 13 00:51:58.048403 env[1301]: time="2025-08-13T00:51:58.048358404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:58.050466 env[1301]: time="2025-08-13T00:51:58.050416937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:58.052266 env[1301]: time="2025-08-13T00:51:58.052226210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:58.054236 env[1301]: time="2025-08-13T00:51:58.054194680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:51:58.055028 env[1301]: time="2025-08-13T00:51:58.054986994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:51:58.055761 env[1301]: time="2025-08-13T00:51:58.055727299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:51:58.647703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913558187.mount: Deactivated successfully.
Aug 13 00:52:01.672767 env[1301]: time="2025-08-13T00:52:01.672659018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:01.677468 env[1301]: time="2025-08-13T00:52:01.677377603Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:01.683402 env[1301]: time="2025-08-13T00:52:01.683336590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:01.688772 env[1301]: time="2025-08-13T00:52:01.688700646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:01.690419 env[1301]: time="2025-08-13T00:52:01.690321483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:52:06.245299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:52:06.253524 kernel: audit: type=1130 audit(1755046326.246:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:06.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:06.246697 systemd[1]: Stopped kubelet.service.
Aug 13 00:52:06.250685 systemd[1]: Starting kubelet.service...
Aug 13 00:52:06.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:06.260492 kernel: audit: type=1131 audit(1755046326.246:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:06.285736 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:52:06.285927 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:52:06.287101 systemd[1]: Stopped kubelet.service.
Aug 13 00:52:06.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:52:06.292479 kernel: audit: type=1130 audit(1755046326.285:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:52:06.296566 systemd[1]: Starting kubelet.service...
Aug 13 00:52:06.358989 systemd[1]: Reloading.
Aug 13 00:52:06.549778 /usr/lib/systemd/system-generators/torcx-generator[1678]: time="2025-08-13T00:52:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:52:06.550727 /usr/lib/systemd/system-generators/torcx-generator[1678]: time="2025-08-13T00:52:06Z" level=info msg="torcx already run"
Aug 13 00:52:06.701260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:52:06.701290 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:52:06.729843 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:52:06.858223 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:52:06.858374 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:52:06.859022 systemd[1]: Stopped kubelet.service.
Aug 13 00:52:06.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:52:06.863480 kernel: audit: type=1130 audit(1755046326.857:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 13 00:52:06.867781 systemd[1]: Starting kubelet.service...
Aug 13 00:52:07.037986 systemd[1]: Started kubelet.service.
Aug 13 00:52:07.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.044329 kernel: audit: type=1130 audit(1755046327.039:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:52:07.110407 kubelet[1744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:52:07.110407 kubelet[1744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:52:07.110407 kubelet[1744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:52:07.110407 kubelet[1744]: I0813 00:52:07.109994 1744 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:52:07.645243 kubelet[1744]: I0813 00:52:07.645114 1744 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:52:07.645243 kubelet[1744]: I0813 00:52:07.645172 1744 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:52:07.645768 kubelet[1744]: I0813 00:52:07.645728 1744 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:52:07.674837 kubelet[1744]: E0813 00:52:07.674792 1744 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.32.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:52:07.680514 kubelet[1744]: I0813 00:52:07.680409 1744 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:52:07.695441 kubelet[1744]: E0813 00:52:07.695364 1744 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:52:07.695768 kubelet[1744]: I0813 00:52:07.695742 1744 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:52:07.704649 kubelet[1744]: I0813 00:52:07.704603 1744 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:52:07.705675 kubelet[1744]: I0813 00:52:07.705640 1744 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:52:07.706212 kubelet[1744]: I0813 00:52:07.706147 1744 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:52:07.706873 kubelet[1744]: I0813 00:52:07.706333 1744 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-8-adc8b0fbd5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Aug 13 00:52:07.707373 kubelet[1744]: I0813 00:52:07.707292 1744 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:52:07.707563 kubelet[1744]: I0813 00:52:07.707544 1744 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:52:07.707929 kubelet[1744]: I0813 00:52:07.707909 1744 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:52:07.718695 kubelet[1744]: I0813 00:52:07.718607 1744 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:52:07.718997 kubelet[1744]: I0813 00:52:07.718975 1744 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:52:07.719223 kubelet[1744]: I0813 00:52:07.719177 1744 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:52:07.719378 kubelet[1744]: I0813 00:52:07.719358 1744 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:52:07.729888 kubelet[1744]: I0813 00:52:07.729846 1744 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:52:07.730500 kubelet[1744]: I0813 00:52:07.730473 1744 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:52:07.731424 kubelet[1744]: W0813 00:52:07.731387 1744 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:52:07.734032 kubelet[1744]: I0813 00:52:07.733980 1744 server.go:1274] "Started kubelet"
Aug 13 00:52:07.735398 kubelet[1744]: W0813 00:52:07.734200 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.32.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-8-adc8b0fbd5&limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused
Aug 13 00:52:07.735398 kubelet[1744]: E0813 00:52:07.734290 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.32.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-8-adc8b0fbd5&limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:52:07.745006 kubelet[1744]: W0813 00:52:07.744918 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.32.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused
Aug 13 00:52:07.745314 kubelet[1744]: E0813 00:52:07.745286 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.32.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:52:07.748231 kubelet[1744]: I0813 00:52:07.748150 1744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:52:07.748487 kubelet[1744]: I0813 00:52:07.748182 1744 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:52:07.748818 kubelet[1744]: I0813 00:52:07.748790 1744 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:52:07.750000 audit[1744]: AVC avc: denied { mac_admin } for pid=1744 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:52:07.752642 kubelet[1744]: I0813 00:52:07.752597 1744 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Aug 13 00:52:07.752835 kubelet[1744]: I0813 00:52:07.752811 1744 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Aug 13 00:52:07.753069 kubelet[1744]: I0813 00:52:07.753046 1744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:52:07.750000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:52:07.757649 kernel: audit: type=1400 audit(1755046327.750:219): avc: denied { mac_admin } for pid=1744 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:52:07.757814 kernel: audit: type=1401 audit(1755046327.750:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:52:07.757859 kernel: audit: type=1300 audit(1755046327.750:219): arch=c000003e syscall=188 success=no exit=-22 a0=c0009e0540 a1=c0008e1e78 a2=c0009e0510 a3=25 items=0 ppid=1 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.750000 audit[1744]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009e0540 a1=c0008e1e78 a2=c0009e0510 a3=25 items=0 ppid=1 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.765206 kubelet[1744]: I0813 00:52:07.765147 1744 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:52:07.750000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 13 00:52:07.771819 kernel: audit: type=1327 audit(1755046327.750:219): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 13 00:52:07.772014 kernel: audit: type=1400 audit(1755046327.751:220): avc: denied { mac_admin } for pid=1744 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:52:07.751000 audit[1744]: AVC avc: denied { mac_admin } for pid=1744 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Aug 13 00:52:07.772583 kubelet[1744]: E0813 00:52:07.766865 1744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.32.218:6443/api/v1/namespaces/default/events\": dial tcp 137.184.32.218:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-8-adc8b0fbd5.185b2d5620ef4f6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-8-adc8b0fbd5,UID:ci-3510.3.8-8-adc8b0fbd5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-8-adc8b0fbd5,},FirstTimestamp:2025-08-13 00:52:07.733931887 +0000 UTC m=+0.679641633,LastTimestamp:2025-08-13 00:52:07.733931887 +0000 UTC m=+0.679641633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-8-adc8b0fbd5,}"
Aug 13 00:52:07.775935 kubelet[1744]: I0813 00:52:07.775598 1744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:52:07.751000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Aug 13 00:52:07.751000 audit[1744]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008e5ca0 a1=c0008e1e90 a2=c0009e05d0 a3=25 items=0 ppid=1 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Aug 13 00:52:07.758000 audit[1755]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.758000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdaa2b1cf0 a2=0 a3=7ffdaa2b1cdc items=0 ppid=1744 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Aug 13 00:52:07.760000 audit[1756]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.760000 audit[1756]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde0580cd0 a2=0 a3=7ffde0580cbc items=0 ppid=1744 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Aug 13 00:52:07.777982 kubelet[1744]: I0813 00:52:07.777938 1744 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:52:07.778391 kubelet[1744]: E0813 00:52:07.778360 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-8-adc8b0fbd5\" not found"
Aug 13 00:52:07.780908 kubelet[1744]: E0813 00:52:07.780848 1744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.32.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-8-adc8b0fbd5?timeout=10s\": dial tcp 137.184.32.218:6443: connect: connection refused" interval="200ms"
Aug 13 00:52:07.779000 audit[1758]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1758 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.779000 audit[1758]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe77be2a00 a2=0 a3=7ffe77be29ec items=0 ppid=1744 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Aug 13 00:52:07.781935 kubelet[1744]: I0813 00:52:07.781875 1744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:52:07.784627 kubelet[1744]: I0813 00:52:07.784601 1744 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:52:07.784818 kubelet[1744]: I0813 00:52:07.784802 1744 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:52:07.785162 kubelet[1744]: I0813 00:52:07.785138 1744 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:52:07.785233 kubelet[1744]: I0813 00:52:07.785212 1744 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:52:07.786000 audit[1760]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.786000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6c5ac350 a2=0 a3=7fff6c5ac33c items=0 ppid=1744 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Aug 13 00:52:07.806000 audit[1763]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.806000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffeafadf110 a2=0 a3=7ffeafadf0fc items=0 ppid=1744 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Aug 13 00:52:07.808041 kubelet[1744]: I0813 00:52:07.807955 1744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:52:07.809000 audit[1764]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Aug 13 00:52:07.809000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff34e19300 a2=0 a3=7fff34e192ec items=0 ppid=1744 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:52:07.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Aug 13 00:52:07.812973 kubelet[1744]: W0813 00:52:07.812877 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.32.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused
Aug 13 00:52:07.814000 audit[1766]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Aug 13 00:52:07.814000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedc5d5bf0
a2=0 a3=7ffedc5d5bdc items=0 ppid=1744 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:52:07.829203 kubelet[1744]: E0813 00:52:07.829141 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.32.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:52:07.831000 audit[1768]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:07.831000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccfa26a60 a2=0 a3=7ffccfa26a4c items=0 ppid=1744 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:52:07.834432 kubelet[1744]: I0813 00:52:07.834376 1744 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:52:07.834650 kubelet[1744]: I0813 00:52:07.834632 1744 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:52:07.834836 kubelet[1744]: I0813 00:52:07.834810 1744 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:52:07.835040 kubelet[1744]: E0813 00:52:07.835011 1744 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:52:07.833000 audit[1769]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:07.833000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2ede83e0 a2=0 a3=7ffd2ede83cc items=0 ppid=1744 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:52:07.837000 audit[1771]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:07.837000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd94a8ff70 a2=0 a3=7ffd94a8ff5c items=0 ppid=1744 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 00:52:07.840966 kubelet[1744]: W0813 00:52:07.840902 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.RuntimeClass: Get "https://137.184.32.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused Aug 13 00:52:07.841145 kubelet[1744]: E0813 00:52:07.840973 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.32.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:52:07.841000 audit[1773]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:07.841000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffde3b452a0 a2=0 a3=7ffde3b4528c items=0 ppid=1744 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 00:52:07.842000 audit[1774]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:07.842000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff51a260a0 a2=0 a3=7fff51a2608c items=0 ppid=1744 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 00:52:07.847126 kubelet[1744]: I0813 
00:52:07.847084 1744 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:52:07.847126 kubelet[1744]: I0813 00:52:07.847106 1744 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:52:07.847126 kubelet[1744]: I0813 00:52:07.847133 1744 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:52:07.849345 kubelet[1744]: I0813 00:52:07.849310 1744 policy_none.go:49] "None policy: Start" Aug 13 00:52:07.850375 kubelet[1744]: I0813 00:52:07.850343 1744 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:52:07.850529 kubelet[1744]: I0813 00:52:07.850384 1744 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:52:07.856144 kubelet[1744]: I0813 00:52:07.856099 1744 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:52:07.854000 audit[1744]: AVC avc: denied { mac_admin } for pid=1744 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:52:07.854000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:07.854000 audit[1744]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a91140 a1=c0006a5b18 a2=c000a91110 a3=25 items=0 ppid=1 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:07.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:07.856549 kubelet[1744]: I0813 00:52:07.856202 1744 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:52:07.856549 kubelet[1744]: I0813 00:52:07.856371 1744 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:52:07.856549 kubelet[1744]: I0813 00:52:07.856392 1744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:52:07.860541 kubelet[1744]: I0813 00:52:07.860503 1744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:52:07.864543 kubelet[1744]: E0813 00:52:07.864464 1744 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-8-adc8b0fbd5\" not found" Aug 13 00:52:07.962981 kubelet[1744]: I0813 00:52:07.958686 1744 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:07.963385 kubelet[1744]: E0813 00:52:07.963328 1744 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.32.218:6443/api/v1/nodes\": dial tcp 137.184.32.218:6443: connect: connection refused" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:07.982137 kubelet[1744]: E0813 00:52:07.982070 1744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.32.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-8-adc8b0fbd5?timeout=10s\": dial tcp 137.184.32.218:6443: connect: connection refused" interval="400ms" Aug 13 00:52:08.085735 kubelet[1744]: I0813 00:52:08.085669 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12e703002e9ba6d5c7e851a907eaab56-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"12e703002e9ba6d5c7e851a907eaab56\") " pod="kube-system/kube-scheduler-ci-3510.3.8-8-adc8b0fbd5" Aug 13 
00:52:08.086059 kubelet[1744]: I0813 00:52:08.086024 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.086306 kubelet[1744]: I0813 00:52:08.086277 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.086495 kubelet[1744]: I0813 00:52:08.086472 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.086619 kubelet[1744]: I0813 00:52:08.086598 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.086826 kubelet[1744]: I0813 00:52:08.086802 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.086961 kubelet[1744]: I0813 00:52:08.086939 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.087107 kubelet[1744]: I0813 00:52:08.087087 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.087221 kubelet[1744]: I0813 00:52:08.087200 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.165089 kubelet[1744]: I0813 00:52:08.165023 1744 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.166125 kubelet[1744]: E0813 00:52:08.166086 1744 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.32.218:6443/api/v1/nodes\": dial tcp 137.184.32.218:6443: connect: connection refused" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.248174 kubelet[1744]: E0813 00:52:08.248001 1744 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:08.248916 kubelet[1744]: E0813 00:52:08.248883 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:08.250278 env[1301]: time="2025-08-13T00:52:08.249718624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-8-adc8b0fbd5,Uid:12e703002e9ba6d5c7e851a907eaab56,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:08.250961 kubelet[1744]: E0813 00:52:08.249047 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:08.251870 env[1301]: time="2025-08-13T00:52:08.251523282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-8-adc8b0fbd5,Uid:d9d1b7258c94fbf00d829063b4e64567,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:08.251870 env[1301]: time="2025-08-13T00:52:08.251651410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5,Uid:b09fe779cd58db34688ec22c809a2ac7,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:08.383116 kubelet[1744]: E0813 00:52:08.382997 1744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.32.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-8-adc8b0fbd5?timeout=10s\": dial tcp 137.184.32.218:6443: connect: connection refused" interval="800ms" Aug 13 00:52:08.568487 kubelet[1744]: I0813 00:52:08.568198 1744 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.569021 kubelet[1744]: E0813 00:52:08.568973 1744 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://137.184.32.218:6443/api/v1/nodes\": dial tcp 137.184.32.218:6443: connect: connection refused" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:08.733352 env[1301]: time="2025-08-13T00:52:08.733300546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.733454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7205708.mount: Deactivated successfully. Aug 13 00:52:08.738533 env[1301]: time="2025-08-13T00:52:08.738368087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.741711 env[1301]: time="2025-08-13T00:52:08.741658007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.743040 env[1301]: time="2025-08-13T00:52:08.742998477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.744111 env[1301]: time="2025-08-13T00:52:08.744032228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.745127 env[1301]: time="2025-08-13T00:52:08.745083971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.748189 env[1301]: time="2025-08-13T00:52:08.748135297Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.752227 env[1301]: time="2025-08-13T00:52:08.752178378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.754489 env[1301]: time="2025-08-13T00:52:08.754415179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.755689 env[1301]: time="2025-08-13T00:52:08.755626838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.756524 env[1301]: time="2025-08-13T00:52:08.756485598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.761511 env[1301]: time="2025-08-13T00:52:08.761404639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:08.784160 env[1301]: time="2025-08-13T00:52:08.784044079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:08.788781 env[1301]: time="2025-08-13T00:52:08.784492629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:08.788781 env[1301]: time="2025-08-13T00:52:08.784550850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:08.788781 env[1301]: time="2025-08-13T00:52:08.784824013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5593c869e1f6079353ba838db1e3eaee9cae503d68ac614dd823d975513f1cf9 pid=1783 runtime=io.containerd.runc.v2 Aug 13 00:52:08.841835 env[1301]: time="2025-08-13T00:52:08.819860190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:08.841835 env[1301]: time="2025-08-13T00:52:08.820047687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:08.841835 env[1301]: time="2025-08-13T00:52:08.820066780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:08.843226 env[1301]: time="2025-08-13T00:52:08.832370163Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/663b1f7541ba638a1307341fefd2136d1d2ae89bdb3289776d5c4655d9eca3e4 pid=1804 runtime=io.containerd.runc.v2 Aug 13 00:52:08.854954 env[1301]: time="2025-08-13T00:52:08.854817541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:08.855197 env[1301]: time="2025-08-13T00:52:08.854960678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:08.855197 env[1301]: time="2025-08-13T00:52:08.855021165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:08.855362 env[1301]: time="2025-08-13T00:52:08.855295813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8de53d7cbee358c480ea2d409152361b03d6021987f085d7ea12008b02aef36 pid=1826 runtime=io.containerd.runc.v2 Aug 13 00:52:08.951758 env[1301]: time="2025-08-13T00:52:08.951599553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-8-adc8b0fbd5,Uid:d9d1b7258c94fbf00d829063b4e64567,Namespace:kube-system,Attempt:0,} returns sandbox id \"5593c869e1f6079353ba838db1e3eaee9cae503d68ac614dd823d975513f1cf9\"" Aug 13 00:52:08.955221 kubelet[1744]: E0813 00:52:08.954830 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:08.960511 env[1301]: time="2025-08-13T00:52:08.958910859Z" level=info msg="CreateContainer within sandbox \"5593c869e1f6079353ba838db1e3eaee9cae503d68ac614dd823d975513f1cf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:52:08.983480 env[1301]: time="2025-08-13T00:52:08.982281686Z" level=info msg="CreateContainer within sandbox \"5593c869e1f6079353ba838db1e3eaee9cae503d68ac614dd823d975513f1cf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4773da1ee5e5f840d5584c0e60f094c21d4027f092662939de821200bba18b17\"" Aug 13 00:52:08.993767 env[1301]: time="2025-08-13T00:52:08.993657846Z" level=info msg="StartContainer for \"4773da1ee5e5f840d5584c0e60f094c21d4027f092662939de821200bba18b17\"" Aug 13 00:52:08.997616 env[1301]: time="2025-08-13T00:52:08.996195086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-8-adc8b0fbd5,Uid:12e703002e9ba6d5c7e851a907eaab56,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e8de53d7cbee358c480ea2d409152361b03d6021987f085d7ea12008b02aef36\"" Aug 13 00:52:08.997862 kubelet[1744]: E0813 00:52:08.997377 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:09.000593 env[1301]: time="2025-08-13T00:52:09.000510292Z" level=info msg="CreateContainer within sandbox \"e8de53d7cbee358c480ea2d409152361b03d6021987f085d7ea12008b02aef36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:52:09.003691 kubelet[1744]: W0813 00:52:09.003637 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.32.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused Aug 13 00:52:09.003901 kubelet[1744]: E0813 00:52:09.003706 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.32.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:52:09.022978 env[1301]: time="2025-08-13T00:52:09.022898597Z" level=info msg="CreateContainer within sandbox \"e8de53d7cbee358c480ea2d409152361b03d6021987f085d7ea12008b02aef36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d68607d6de8cbe09daafec9cdd82c3431ddc64859bdaddc390ea2bb3e991e00\"" Aug 13 00:52:09.027042 env[1301]: time="2025-08-13T00:52:09.026969197Z" level=info msg="StartContainer for \"5d68607d6de8cbe09daafec9cdd82c3431ddc64859bdaddc390ea2bb3e991e00\"" Aug 13 00:52:09.037778 env[1301]: time="2025-08-13T00:52:09.037709883Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5,Uid:b09fe779cd58db34688ec22c809a2ac7,Namespace:kube-system,Attempt:0,} returns sandbox id \"663b1f7541ba638a1307341fefd2136d1d2ae89bdb3289776d5c4655d9eca3e4\"" Aug 13 00:52:09.039043 kubelet[1744]: E0813 00:52:09.038945 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:09.042224 env[1301]: time="2025-08-13T00:52:09.042140390Z" level=info msg="CreateContainer within sandbox \"663b1f7541ba638a1307341fefd2136d1d2ae89bdb3289776d5c4655d9eca3e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:52:09.068777 env[1301]: time="2025-08-13T00:52:09.068726461Z" level=info msg="CreateContainer within sandbox \"663b1f7541ba638a1307341fefd2136d1d2ae89bdb3289776d5c4655d9eca3e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f801093613bdb8bd9316033b1a6f7693f8c1fba84deb1b9be4063b46e5f9b55\"" Aug 13 00:52:09.083542 kubelet[1744]: W0813 00:52:09.083410 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.32.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused Aug 13 00:52:09.084348 env[1301]: time="2025-08-13T00:52:09.084295034Z" level=info msg="StartContainer for \"1f801093613bdb8bd9316033b1a6f7693f8c1fba84deb1b9be4063b46e5f9b55\"" Aug 13 00:52:09.085458 kubelet[1744]: E0813 00:52:09.085371 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.32.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 
00:52:09.169331 env[1301]: time="2025-08-13T00:52:09.169164251Z" level=info msg="StartContainer for \"4773da1ee5e5f840d5584c0e60f094c21d4027f092662939de821200bba18b17\" returns successfully" Aug 13 00:52:09.174262 kubelet[1744]: W0813 00:52:09.174166 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.32.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-8-adc8b0fbd5&limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused Aug 13 00:52:09.174744 kubelet[1744]: E0813 00:52:09.174275 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.32.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-8-adc8b0fbd5&limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:52:09.186277 kubelet[1744]: E0813 00:52:09.184621 1744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.32.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-8-adc8b0fbd5?timeout=10s\": dial tcp 137.184.32.218:6443: connect: connection refused" interval="1.6s" Aug 13 00:52:09.223309 env[1301]: time="2025-08-13T00:52:09.223241868Z" level=info msg="StartContainer for \"5d68607d6de8cbe09daafec9cdd82c3431ddc64859bdaddc390ea2bb3e991e00\" returns successfully" Aug 13 00:52:09.225361 kubelet[1744]: W0813 00:52:09.225285 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.32.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.32.218:6443: connect: connection refused Aug 13 00:52:09.225577 kubelet[1744]: E0813 00:52:09.225366 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://137.184.32.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.32.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:52:09.277793 env[1301]: time="2025-08-13T00:52:09.277709355Z" level=info msg="StartContainer for \"1f801093613bdb8bd9316033b1a6f7693f8c1fba84deb1b9be4063b46e5f9b55\" returns successfully" Aug 13 00:52:09.371016 kubelet[1744]: I0813 00:52:09.370948 1744 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:09.371703 kubelet[1744]: E0813 00:52:09.371657 1744 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.32.218:6443/api/v1/nodes\": dial tcp 137.184.32.218:6443: connect: connection refused" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:09.851243 kubelet[1744]: E0813 00:52:09.851198 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:09.854143 kubelet[1744]: E0813 00:52:09.854096 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:09.856960 kubelet[1744]: E0813 00:52:09.856924 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:10.860464 kubelet[1744]: E0813 00:52:10.860292 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:10.904258 kubelet[1744]: E0813 00:52:10.904186 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:10.973935 kubelet[1744]: I0813 00:52:10.973895 1744 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:11.546782 kubelet[1744]: E0813 00:52:11.546726 1744 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-8-adc8b0fbd5\" not found" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:11.649881 kubelet[1744]: I0813 00:52:11.649835 1744 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:11.650144 kubelet[1744]: E0813 00:52:11.650126 1744 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-8-adc8b0fbd5\": node \"ci-3510.3.8-8-adc8b0fbd5\" not found" Aug 13 00:52:11.747230 kubelet[1744]: I0813 00:52:11.747188 1744 apiserver.go:52] "Watching apiserver" Aug 13 00:52:11.785865 kubelet[1744]: I0813 00:52:11.785817 1744 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:52:12.708415 kubelet[1744]: W0813 00:52:12.708342 1744 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:52:12.709106 kubelet[1744]: E0813 00:52:12.708851 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:12.863312 kubelet[1744]: E0813 00:52:12.863272 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:14.020992 systemd[1]: Reloading. 
Aug 13 00:52:14.128574 /usr/lib/systemd/system-generators/torcx-generator[2039]: time="2025-08-13T00:52:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:52:14.128649 /usr/lib/systemd/system-generators/torcx-generator[2039]: time="2025-08-13T00:52:14Z" level=info msg="torcx already run" Aug 13 00:52:14.285535 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:52:14.285901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:52:14.320207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:52:14.468398 systemd[1]: Stopping kubelet.service... Aug 13 00:52:14.498306 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 00:52:14.498542 kernel: audit: type=1131 audit(1755046334.491:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:14.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:14.492579 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:52:14.493080 systemd[1]: Stopped kubelet.service. Aug 13 00:52:14.502614 systemd[1]: Starting kubelet.service... Aug 13 00:52:15.798597 systemd[1]: Started kubelet.service. 
Aug 13 00:52:15.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:15.803665 kernel: audit: type=1130 audit(1755046335.797:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:15.949501 kubelet[2099]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:52:15.949501 kubelet[2099]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:52:15.949501 kubelet[2099]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:52:15.963720 kubelet[2099]: I0813 00:52:15.963623 2099 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:52:15.977040 kubelet[2099]: I0813 00:52:15.976955 2099 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:52:15.977040 kubelet[2099]: I0813 00:52:15.977031 2099 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:52:15.977886 kubelet[2099]: I0813 00:52:15.977839 2099 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:52:15.981195 kubelet[2099]: I0813 00:52:15.981143 2099 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:52:16.010848 kubelet[2099]: I0813 00:52:16.010781 2099 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:52:16.032310 kubelet[2099]: E0813 00:52:16.032259 2099 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:52:16.032618 kubelet[2099]: I0813 00:52:16.032586 2099 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:52:16.044168 kubelet[2099]: I0813 00:52:16.044112 2099 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:52:16.045097 kubelet[2099]: I0813 00:52:16.045068 2099 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:52:16.045531 kubelet[2099]: I0813 00:52:16.045490 2099 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:52:16.045882 kubelet[2099]: I0813 00:52:16.045630 2099 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-8-adc8b0fbd5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:52:16.046052 kubelet[2099]: I0813 00:52:16.046037 2099 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:52:16.046122 kubelet[2099]: I0813 00:52:16.046112 2099 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:52:16.046211 kubelet[2099]: I0813 00:52:16.046202 2099 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:52:16.046412 kubelet[2099]: I0813 00:52:16.046402 2099 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:52:16.047315 kubelet[2099]: I0813 00:52:16.047297 2099 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:52:16.047499 kubelet[2099]: I0813 00:52:16.047486 2099 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:52:16.047873 kubelet[2099]: I0813 00:52:16.047859 2099 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:52:16.057197 kubelet[2099]: I0813 00:52:16.056416 2099 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:52:16.058674 kubelet[2099]: I0813 00:52:16.058652 2099 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:52:16.060002 kubelet[2099]: I0813 00:52:16.059979 2099 server.go:1274] "Started kubelet" Aug 13 00:52:16.068316 kernel: audit: type=1400 audit(1755046336.061:236): avc: denied { mac_admin } for pid=2099 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:52:16.068672 kernel: audit: type=1401 audit(1755046336.061:236): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:16.061000 audit[2099]: AVC avc: denied { mac_admin } for pid=2099 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:52:16.061000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:16.061000 audit[2099]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5b9b0 a1=c000b4b650 a2=c000b5b980 a3=25 items=0 ppid=1 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:16.073550 kernel: audit: type=1300 audit(1755046336.061:236): arch=c000003e syscall=188 success=no exit=-22 a0=c000b5b9b0 a1=c000b4b650 a2=c000b5b980 a3=25 items=0 ppid=1 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:16.073956 kubelet[2099]: I0813 00:52:16.073875 2099 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:52:16.075488 kubelet[2099]: I0813 00:52:16.075463 2099 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:52:16.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:16.086351 kernel: audit: type=1327 audit(1755046336.061:236): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:16.086589 kubelet[2099]: I0813 00:52:16.086324 2099 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:52:16.086699 kubelet[2099]: I0813 00:52:16.086652 2099 server.go:236] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:52:16.095000 audit[2099]: AVC avc: denied { mac_admin } for pid=2099 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:52:16.101901 kernel: audit: type=1400 audit(1755046336.095:237): avc: denied { mac_admin } for pid=2099 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:52:16.101963 kernel: audit: type=1401 audit(1755046336.095:237): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:16.102001 kernel: audit: type=1300 audit(1755046336.095:237): arch=c000003e syscall=188 success=no exit=-22 a0=c000c350c0 a1=c000b4a018 a2=c000c24f60 a3=25 items=0 ppid=1 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:16.095000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:16.095000 audit[2099]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c350c0 a1=c000b4a018 a2=c000c24f60 a3=25 items=0 ppid=1 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:16.102271 kubelet[2099]: I0813 00:52:16.096488 2099 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 00:52:16.107493 kubelet[2099]: I0813 00:52:16.106537 2099 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context 
on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 00:52:16.107493 kubelet[2099]: I0813 00:52:16.106651 2099 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:52:16.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:16.112470 kernel: audit: type=1327 audit(1755046336.095:237): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:16.120265 kubelet[2099]: I0813 00:52:16.118467 2099 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:52:16.121815 kubelet[2099]: I0813 00:52:16.121573 2099 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:52:16.129328 kubelet[2099]: E0813 00:52:16.123792 2099 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-8-adc8b0fbd5\" not found" Aug 13 00:52:16.129328 kubelet[2099]: I0813 00:52:16.126215 2099 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:52:16.140943 kubelet[2099]: I0813 00:52:16.137348 2099 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:52:16.152491 kubelet[2099]: E0813 00:52:16.150162 2099 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:52:16.154816 kubelet[2099]: I0813 00:52:16.154762 2099 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:52:16.155318 kubelet[2099]: I0813 00:52:16.155273 2099 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:52:16.158512 kubelet[2099]: I0813 00:52:16.158477 2099 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:52:16.189640 kubelet[2099]: I0813 00:52:16.188326 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:52:16.199309 kubelet[2099]: I0813 00:52:16.199256 2099 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:52:16.199626 kubelet[2099]: I0813 00:52:16.199603 2099 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:52:16.199799 kubelet[2099]: I0813 00:52:16.199783 2099 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:52:16.200005 kubelet[2099]: E0813 00:52:16.199976 2099 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:52:16.300314 kubelet[2099]: E0813 00:52:16.300255 2099 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:52:16.336629 kubelet[2099]: I0813 00:52:16.336592 2099 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:52:16.339252 kubelet[2099]: I0813 00:52:16.339202 2099 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:52:16.339563 kubelet[2099]: I0813 00:52:16.339546 2099 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:52:16.340029 kubelet[2099]: I0813 00:52:16.339997 2099 
state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:52:16.340192 kubelet[2099]: I0813 00:52:16.340144 2099 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:52:16.340349 kubelet[2099]: I0813 00:52:16.340330 2099 policy_none.go:49] "None policy: Start" Aug 13 00:52:16.341931 kubelet[2099]: I0813 00:52:16.341897 2099 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:52:16.342246 kubelet[2099]: I0813 00:52:16.342230 2099 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:52:16.342645 kubelet[2099]: I0813 00:52:16.342625 2099 state_mem.go:75] "Updated machine memory state" Aug 13 00:52:16.346797 kubelet[2099]: I0813 00:52:16.346766 2099 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:52:16.345000 audit[2099]: AVC avc: denied { mac_admin } for pid=2099 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:52:16.345000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 00:52:16.345000 audit[2099]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00108cf30 a1=c000d35398 a2=c00108cf00 a3=25 items=0 ppid=1 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:16.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 00:52:16.347495 kubelet[2099]: I0813 00:52:16.347455 2099 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 00:52:16.347837 kubelet[2099]: I0813 00:52:16.347819 2099 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:52:16.350799 kubelet[2099]: I0813 00:52:16.348052 2099 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:52:16.350799 kubelet[2099]: I0813 00:52:16.349996 2099 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:52:16.459071 kubelet[2099]: I0813 00:52:16.458990 2099 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.474326 kubelet[2099]: I0813 00:52:16.474268 2099 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.474746 kubelet[2099]: I0813 00:52:16.474710 2099 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.514873 kubelet[2099]: W0813 00:52:16.514831 2099 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:52:16.516860 kubelet[2099]: W0813 00:52:16.516821 2099 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:52:16.523019 kubelet[2099]: W0813 00:52:16.522958 2099 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:52:16.523410 kubelet[2099]: E0813 00:52:16.523369 2099 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.541773 kubelet[2099]: 
I0813 00:52:16.541707 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.542239 kubelet[2099]: I0813 00:52:16.542200 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.542510 kubelet[2099]: I0813 00:52:16.542483 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.542681 kubelet[2099]: I0813 00:52:16.542660 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.542788 kubelet[2099]: I0813 00:52:16.542774 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" 
(UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.542893 kubelet[2099]: I0813 00:52:16.542874 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d1b7258c94fbf00d829063b4e64567-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"d9d1b7258c94fbf00d829063b4e64567\") " pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.543057 kubelet[2099]: I0813 00:52:16.543034 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.543208 kubelet[2099]: I0813 00:52:16.543185 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b09fe779cd58db34688ec22c809a2ac7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"b09fe779cd58db34688ec22c809a2ac7\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.543346 kubelet[2099]: I0813 00:52:16.543327 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/12e703002e9ba6d5c7e851a907eaab56-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-8-adc8b0fbd5\" (UID: \"12e703002e9ba6d5c7e851a907eaab56\") " pod="kube-system/kube-scheduler-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:16.817505 kubelet[2099]: E0813 00:52:16.817105 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:16.818051 kubelet[2099]: E0813 00:52:16.818016 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:16.824088 kubelet[2099]: E0813 00:52:16.824042 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:17.049024 kubelet[2099]: I0813 00:52:17.048978 2099 apiserver.go:52] "Watching apiserver" Aug 13 00:52:17.127539 kubelet[2099]: I0813 00:52:17.126813 2099 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:52:17.255387 kubelet[2099]: E0813 00:52:17.255338 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:17.257120 kubelet[2099]: E0813 00:52:17.256672 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:17.268427 kubelet[2099]: W0813 00:52:17.268377 2099 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:52:17.268632 kubelet[2099]: E0813 00:52:17.268482 2099 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-8-adc8b0fbd5\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:52:17.268817 kubelet[2099]: E0813 00:52:17.268763 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:17.309967 kubelet[2099]: I0813 00:52:17.309849 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-8-adc8b0fbd5" podStartSLOduration=1.3098264130000001 podStartE2EDuration="1.309826413s" podCreationTimestamp="2025-08-13 00:52:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:17.298495743 +0000 UTC m=+1.461187135" watchObservedRunningTime="2025-08-13 00:52:17.309826413 +0000 UTC m=+1.472517799" Aug 13 00:52:17.321758 kubelet[2099]: I0813 00:52:17.321671 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-8-adc8b0fbd5" podStartSLOduration=1.3216382819999999 podStartE2EDuration="1.321638282s" podCreationTimestamp="2025-08-13 00:52:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:17.310340495 +0000 UTC m=+1.473031864" watchObservedRunningTime="2025-08-13 00:52:17.321638282 +0000 UTC m=+1.484329682" Aug 13 00:52:18.258474 kubelet[2099]: E0813 00:52:18.258388 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:18.260220 kubelet[2099]: E0813 00:52:18.260175 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:19.260387 kubelet[2099]: E0813 00:52:19.260337 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:19.786685 
kubelet[2099]: I0813 00:52:19.786648 2099 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:52:19.787323 env[1301]: time="2025-08-13T00:52:19.787273992Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:52:19.788128 kubelet[2099]: I0813 00:52:19.788093 2099 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:52:20.693989 kubelet[2099]: E0813 00:52:20.693943 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:20.717402 kubelet[2099]: I0813 00:52:20.717324 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-8-adc8b0fbd5" podStartSLOduration=8.717220902 podStartE2EDuration="8.717220902s" podCreationTimestamp="2025-08-13 00:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:17.322311121 +0000 UTC m=+1.485002507" watchObservedRunningTime="2025-08-13 00:52:20.717220902 +0000 UTC m=+4.879912290" Aug 13 00:52:20.870182 kubelet[2099]: I0813 00:52:20.870104 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0b956ef-7fc4-408f-86b8-45a2caabe218-kube-proxy\") pod \"kube-proxy-thhpv\" (UID: \"e0b956ef-7fc4-408f-86b8-45a2caabe218\") " pod="kube-system/kube-proxy-thhpv" Aug 13 00:52:20.870437 kubelet[2099]: I0813 00:52:20.870197 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2c9\" (UniqueName: \"kubernetes.io/projected/e0b956ef-7fc4-408f-86b8-45a2caabe218-kube-api-access-qv2c9\") pod \"kube-proxy-thhpv\" (UID: 
\"e0b956ef-7fc4-408f-86b8-45a2caabe218\") " pod="kube-system/kube-proxy-thhpv" Aug 13 00:52:20.870437 kubelet[2099]: I0813 00:52:20.870243 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0b956ef-7fc4-408f-86b8-45a2caabe218-xtables-lock\") pod \"kube-proxy-thhpv\" (UID: \"e0b956ef-7fc4-408f-86b8-45a2caabe218\") " pod="kube-system/kube-proxy-thhpv" Aug 13 00:52:20.870437 kubelet[2099]: I0813 00:52:20.870268 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0b956ef-7fc4-408f-86b8-45a2caabe218-lib-modules\") pod \"kube-proxy-thhpv\" (UID: \"e0b956ef-7fc4-408f-86b8-45a2caabe218\") " pod="kube-system/kube-proxy-thhpv" Aug 13 00:52:20.971844 kubelet[2099]: I0813 00:52:20.971643 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkzp\" (UniqueName: \"kubernetes.io/projected/8c62b21c-99fd-4479-b2e2-84baff86a99c-kube-api-access-4tkzp\") pod \"tigera-operator-5bf8dfcb4-4jqqj\" (UID: \"8c62b21c-99fd-4479-b2e2-84baff86a99c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4jqqj" Aug 13 00:52:20.971844 kubelet[2099]: I0813 00:52:20.971757 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c62b21c-99fd-4479-b2e2-84baff86a99c-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-4jqqj\" (UID: \"8c62b21c-99fd-4479-b2e2-84baff86a99c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4jqqj" Aug 13 00:52:20.984357 kubelet[2099]: I0813 00:52:20.984292 2099 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:52:21.067279 kubelet[2099]: E0813 00:52:21.067222 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:21.068555 env[1301]: time="2025-08-13T00:52:21.068491659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thhpv,Uid:e0b956ef-7fc4-408f-86b8-45a2caabe218,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:21.108919 env[1301]: time="2025-08-13T00:52:21.108607804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:21.108919 env[1301]: time="2025-08-13T00:52:21.108664634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:21.108919 env[1301]: time="2025-08-13T00:52:21.108676147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:21.109339 env[1301]: time="2025-08-13T00:52:21.109267788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae78d36cf5a77108f963f341645d2949d4d4c2b89666f3a959da0f3aeda30b26 pid=2150 runtime=io.containerd.runc.v2 Aug 13 00:52:21.189866 env[1301]: time="2025-08-13T00:52:21.189801369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thhpv,Uid:e0b956ef-7fc4-408f-86b8-45a2caabe218,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae78d36cf5a77108f963f341645d2949d4d4c2b89666f3a959da0f3aeda30b26\"" Aug 13 00:52:21.192408 kubelet[2099]: E0813 00:52:21.191982 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:21.198102 env[1301]: time="2025-08-13T00:52:21.198001026Z" level=info msg="CreateContainer within sandbox \"ae78d36cf5a77108f963f341645d2949d4d4c2b89666f3a959da0f3aeda30b26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:52:21.216231 env[1301]: time="2025-08-13T00:52:21.216139483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4jqqj,Uid:8c62b21c-99fd-4479-b2e2-84baff86a99c,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:52:21.230799 env[1301]: time="2025-08-13T00:52:21.230631131Z" level=info msg="CreateContainer within sandbox \"ae78d36cf5a77108f963f341645d2949d4d4c2b89666f3a959da0f3aeda30b26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dad1496b14110ef585d7a46a537d1b6f885530c2b23cf4aaafaa58bc697f10c3\"" Aug 13 00:52:21.235205 env[1301]: time="2025-08-13T00:52:21.235136841Z" level=info msg="StartContainer for \"dad1496b14110ef585d7a46a537d1b6f885530c2b23cf4aaafaa58bc697f10c3\"" Aug 13 00:52:21.242829 env[1301]: time="2025-08-13T00:52:21.242720790Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:21.242829 env[1301]: time="2025-08-13T00:52:21.242783817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:21.243079 env[1301]: time="2025-08-13T00:52:21.242796575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:21.243302 env[1301]: time="2025-08-13T00:52:21.243244002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22547e1182512f946f07c890d1f3b5586f202ca7816824f210acaae52911adce pid=2191 runtime=io.containerd.runc.v2 Aug 13 00:52:21.272127 kubelet[2099]: E0813 00:52:21.272082 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:21.388341 env[1301]: time="2025-08-13T00:52:21.387950862Z" level=info msg="StartContainer for \"dad1496b14110ef585d7a46a537d1b6f885530c2b23cf4aaafaa58bc697f10c3\" returns successfully" Aug 13 00:52:21.404247 env[1301]: time="2025-08-13T00:52:21.404183626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4jqqj,Uid:8c62b21c-99fd-4479-b2e2-84baff86a99c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"22547e1182512f946f07c890d1f3b5586f202ca7816824f210acaae52911adce\"" Aug 13 00:52:21.411892 env[1301]: time="2025-08-13T00:52:21.411840622Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:52:21.632000 audit[2289]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.634571 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 00:52:21.634713 kernel: 
audit: type=1325 audit(1755046341.632:239): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.632000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde51188c0 a2=0 a3=7ffde51188ac items=0 ppid=2236 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.640747 kernel: audit: type=1300 audit(1755046341.632:239): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde51188c0 a2=0 a3=7ffde51188ac items=0 ppid=2236 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:52:21.644793 kernel: audit: type=1327 audit(1755046341.632:239): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:52:21.633000 audit[2291]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.633000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea0a75070 a2=0 a3=7ffea0a7505c items=0 ppid=2236 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.653643 kernel: audit: type=1325 audit(1755046341.633:240): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.653815 kernel: audit: 
type=1300 audit(1755046341.633:240): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea0a75070 a2=0 a3=7ffea0a7505c items=0 ppid=2236 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:52:21.656136 kernel: audit: type=1327 audit(1755046341.633:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:52:21.636000 audit[2292]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.636000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5b1d1a00 a2=0 a3=7ffd5b1d19ec items=0 ppid=2236 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.662634 kernel: audit: type=1325 audit(1755046341.636:241): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.662797 kernel: audit: type=1300 audit(1755046341.636:241): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5b1d1a00 a2=0 a3=7ffd5b1d19ec items=0 ppid=2236 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:52:21.664901 kernel: audit: type=1327 
audit(1755046341.636:241): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:52:21.643000 audit[2290]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.667025 kernel: audit: type=1325 audit(1755046341.643:242): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.643000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5219c320 a2=0 a3=7ffc5219c30c items=0 ppid=2236 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 00:52:21.657000 audit[2293]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.657000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6edc05c0 a2=0 a3=7ffd6edc05ac items=0 ppid=2236 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 00:52:21.661000 audit[2294]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.661000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0285ed30 a2=0 a3=7ffe0285ed1c 
items=0 ppid=2236 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 00:52:21.745000 audit[2295]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.745000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe699130c0 a2=0 a3=7ffe699130ac items=0 ppid=2236 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:52:21.753000 audit[2297]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.753000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff82543df0 a2=0 a3=7fff82543ddc items=0 ppid=2236 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 00:52:21.762000 audit[2300]: NETFILTER_CFG table=filter:46 family=2 entries=1 
op=nft_register_rule pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.762000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff8819f0e0 a2=0 a3=7fff8819f0cc items=0 ppid=2236 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 00:52:21.765000 audit[2301]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.765000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd51bde490 a2=0 a3=7ffd51bde47c items=0 ppid=2236 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:52:21.770000 audit[2303]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.770000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffacb2d2e0 a2=0 a3=7fffacb2d2cc items=0 ppid=2236 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.770000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:52:21.773000 audit[2304]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.773000 audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3ac18cf0 a2=0 a3=7ffe3ac18cdc items=0 ppid=2236 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:52:21.781000 audit[2306]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.781000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffc98d98a0 a2=0 a3=7fffc98d988c items=0 ppid=2236 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:52:21.789000 audit[2309]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.789000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffcc4f76900 a2=0 a3=7ffcc4f768ec items=0 ppid=2236 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 00:52:21.791000 audit[2310]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.791000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc25ae11d0 a2=0 a3=7ffc25ae11bc items=0 ppid=2236 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:52:21.797000 audit[2312]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.797000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc94e1fac0 a2=0 a3=7ffc94e1faac items=0 ppid=2236 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.797000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:52:21.799000 audit[2313]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.799000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdda42da10 a2=0 a3=7ffdda42d9fc items=0 ppid=2236 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:52:21.805000 audit[2315]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.805000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2fd10b90 a2=0 a3=7fff2fd10b7c items=0 ppid=2236 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:52:21.813000 audit[2318]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.813000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc497dab10 a2=0 a3=7ffc497daafc items=0 ppid=2236 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:52:21.820000 audit[2321]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.820000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff749c1dc0 a2=0 a3=7fff749c1dac items=0 ppid=2236 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:52:21.823000 audit[2322]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.823000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff279e06a0 a2=0 a3=7fff279e068c items=0 ppid=2236 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.823000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:52:21.828000 audit[2324]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.828000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd0a239340 a2=0 a3=7ffd0a23932c items=0 ppid=2236 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:52:21.836000 audit[2327]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.836000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd49fa1d0 a2=0 a3=7fffd49fa1bc items=0 ppid=2236 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:52:21.839000 audit[2328]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.839000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff37f3bb90 a2=0 a3=7fff37f3bb7c items=0 ppid=2236 pid=2328 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:52:21.844000 audit[2330]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 00:52:21.844000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe54d29d00 a2=0 a3=7ffe54d29cec items=0 ppid=2236 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 00:52:21.895000 audit[2336]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:21.895000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd955efc0 a2=0 a3=7ffdd955efac items=0 ppid=2236 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:21.913000 audit[2336]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Aug 13 00:52:21.913000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdd955efc0 a2=0 a3=7ffdd955efac items=0 ppid=2236 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:21.915000 audit[2341]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.915000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd9cb7100 a2=0 a3=7ffdd9cb70ec items=0 ppid=2236 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 00:52:21.922000 audit[2343]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.922000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc0196e980 a2=0 a3=7ffc0196e96c items=0 ppid=2236 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.922000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 00:52:21.931000 audit[2346]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.931000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff4a916a10 a2=0 a3=7fff4a9169fc items=0 ppid=2236 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 00:52:21.934000 audit[2347]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.934000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb44a5630 a2=0 a3=7ffeb44a561c items=0 ppid=2236 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 00:52:21.939000 audit[2349]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.939000 audit[2349]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc23c9a3b0 a2=0 a3=7ffc23c9a39c items=0 ppid=2236 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 00:52:21.942000 audit[2350]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.942000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb2b32e10 a2=0 a3=7fffb2b32dfc items=0 ppid=2236 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 00:52:21.948000 audit[2352]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.948000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff841c5c70 a2=0 a3=7fff841c5c5c items=0 ppid=2236 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.948000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 00:52:21.965000 audit[2355]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.965000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc9171ed00 a2=0 a3=7ffc9171ecec items=0 ppid=2236 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 00:52:21.969000 audit[2356]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.969000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe74f84220 a2=0 a3=7ffe74f8420c items=0 ppid=2236 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 00:52:21.975000 audit[2358]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.975000 audit[2358]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe86874d00 a2=0 a3=7ffe86874cec items=0 ppid=2236 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 00:52:21.977000 audit[2359]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.977000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca4a8f7d0 a2=0 a3=7ffca4a8f7bc items=0 ppid=2236 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 00:52:21.982000 audit[2361]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:21.982000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc438779f0 a2=0 a3=7ffc438779dc items=0 ppid=2236 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:21.982000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 00:52:22.002364 systemd[1]: run-containerd-runc-k8s.io-ae78d36cf5a77108f963f341645d2949d4d4c2b89666f3a959da0f3aeda30b26-runc.iGvo7R.mount: Deactivated successfully. Aug 13 00:52:22.008000 audit[2364]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.008000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe761aca60 a2=0 a3=7ffe761aca4c items=0 ppid=2236 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 00:52:22.017000 audit[2367]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.017000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffafec82f0 a2=0 a3=7fffafec82dc items=0 ppid=2236 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.017000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 00:52:22.021000 audit[2368]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.021000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff43e96700 a2=0 a3=7fff43e966ec items=0 ppid=2236 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 00:52:22.026000 audit[2370]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.026000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd481b24a0 a2=0 a3=7ffd481b248c items=0 ppid=2236 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:52:22.034000 audit[2373]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.034000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdfc4bbac0 a2=0 
a3=7ffdfc4bbaac items=0 ppid=2236 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 00:52:22.037000 audit[2374]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.037000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff29be3bf0 a2=0 a3=7fff29be3bdc items=0 ppid=2236 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 00:52:22.042000 audit[2376]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.042000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc0ade51d0 a2=0 a3=7ffc0ade51bc items=0 ppid=2236 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 
00:52:22.045000 audit[2377]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.045000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5efe2950 a2=0 a3=7ffc5efe293c items=0 ppid=2236 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 00:52:22.050000 audit[2379]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.050000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea7e98dc0 a2=0 a3=7ffea7e98dac items=0 ppid=2236 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:52:22.058000 audit[2382]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 00:52:22.058000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd227336a0 a2=0 a3=7ffd2273368c items=0 ppid=2236 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.058000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 00:52:22.068000 audit[2384]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:52:22.068000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff86883f20 a2=0 a3=7fff86883f0c items=0 ppid=2236 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.068000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:22.069000 audit[2384]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 00:52:22.069000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff86883f20 a2=0 a3=7fff86883f0c items=0 ppid=2236 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:22.069000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:22.273950 kubelet[2099]: E0813 00:52:22.273793 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:23.097682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917665777.mount: Deactivated successfully. 
Aug 13 00:52:23.283207 kubelet[2099]: E0813 00:52:23.283124 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:24.365843 env[1301]: time="2025-08-13T00:52:24.365769940Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:24.367424 env[1301]: time="2025-08-13T00:52:24.367368958Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:24.369274 env[1301]: time="2025-08-13T00:52:24.369230209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:24.371024 env[1301]: time="2025-08-13T00:52:24.370979303Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:24.371864 env[1301]: time="2025-08-13T00:52:24.371813839Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:52:24.376025 env[1301]: time="2025-08-13T00:52:24.375976206Z" level=info msg="CreateContainer within sandbox \"22547e1182512f946f07c890d1f3b5586f202ca7816824f210acaae52911adce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:52:24.389718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239055642.mount: Deactivated successfully. 
Aug 13 00:52:24.399412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724748641.mount: Deactivated successfully. Aug 13 00:52:24.402271 env[1301]: time="2025-08-13T00:52:24.402211555Z" level=info msg="CreateContainer within sandbox \"22547e1182512f946f07c890d1f3b5586f202ca7816824f210acaae52911adce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"db223683792a5469fd616bfccb4dacf16fc5caaa9db7073b4247209cfc855afc\"" Aug 13 00:52:24.403279 env[1301]: time="2025-08-13T00:52:24.403241910Z" level=info msg="StartContainer for \"db223683792a5469fd616bfccb4dacf16fc5caaa9db7073b4247209cfc855afc\"" Aug 13 00:52:24.507065 env[1301]: time="2025-08-13T00:52:24.501448506Z" level=info msg="StartContainer for \"db223683792a5469fd616bfccb4dacf16fc5caaa9db7073b4247209cfc855afc\" returns successfully" Aug 13 00:52:25.302308 kubelet[2099]: I0813 00:52:25.302243 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-thhpv" podStartSLOduration=5.30221955 podStartE2EDuration="5.30221955s" podCreationTimestamp="2025-08-13 00:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:22.295367228 +0000 UTC m=+6.458058621" watchObservedRunningTime="2025-08-13 00:52:25.30221955 +0000 UTC m=+9.464910937" Aug 13 00:52:25.303165 kubelet[2099]: I0813 00:52:25.303114 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-4jqqj" podStartSLOduration=2.339665684 podStartE2EDuration="5.303090092s" podCreationTimestamp="2025-08-13 00:52:20 +0000 UTC" firstStartedPulling="2025-08-13 00:52:21.409845128 +0000 UTC m=+5.572536497" lastFinishedPulling="2025-08-13 00:52:24.373269524 +0000 UTC m=+8.535960905" observedRunningTime="2025-08-13 00:52:25.303089657 +0000 UTC m=+9.465781065" watchObservedRunningTime="2025-08-13 00:52:25.303090092 +0000 UTC 
m=+9.465781481" Aug 13 00:52:26.512412 kubelet[2099]: E0813 00:52:26.512363 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:27.293759 kubelet[2099]: E0813 00:52:27.293706 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:27.798969 kubelet[2099]: E0813 00:52:27.798912 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:28.306492 kubelet[2099]: E0813 00:52:28.306421 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:28.507801 update_engine[1293]: I0813 00:52:28.507694 1293 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:52:31.350296 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 00:52:31.350526 kernel: audit: type=1325 audit(1755046351.345:290): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.345000 audit[2465]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.345000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf0f2f970 a2=0 a3=7ffcf0f2f95c items=0 ppid=2236 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.356606 kernel: audit: type=1300 audit(1755046351.345:290): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf0f2f970 a2=0 a3=7ffcf0f2f95c items=0 ppid=2236 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.361478 kernel: audit: type=1327 audit(1755046351.345:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.360000 audit[2465]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.365507 kernel: audit: type=1325 audit(1755046351.360:291): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.360000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffcf0f2f970 a2=0 a3=0 items=0 ppid=2236 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.373476 kernel: audit: type=1300 audit(1755046351.360:291): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf0f2f970 a2=0 a3=0 items=0 ppid=2236 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.383508 kernel: audit: type=1327 audit(1755046351.360:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.398000 audit[2467]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.408120 kernel: audit: type=1325 audit(1755046351.398:292): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.408319 kernel: audit: type=1300 audit(1755046351.398:292): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcaa54ff50 a2=0 a3=7ffcaa54ff3c items=0 ppid=2236 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.398000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcaa54ff50 a2=0 a3=7ffcaa54ff3c items=0 ppid=2236 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.416604 kernel: audit: type=1327 audit(1755046351.398:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:31.410000 audit[2467]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.421585 kernel: audit: type=1325 audit(1755046351.410:293): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:31.410000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcaa54ff50 a2=0 a3=0 items=0 ppid=2236 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:31.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:32.372147 sudo[1467]: pam_unix(sudo:session): session closed for user root Aug 13 00:52:32.370000 audit[1467]: USER_END pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 00:52:32.370000 audit[1467]: CRED_DISP pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 00:52:32.381019 sshd[1462]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:32.381000 audit[1462]: USER_END pid=1462 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:52:32.381000 audit[1462]: CRED_DISP pid=1462 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:52:32.386162 systemd[1]: sshd@6-137.184.32.218:22-139.178.68.195:39302.service: Deactivated successfully. Aug 13 00:52:32.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.32.218:22-139.178.68.195:39302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:52:32.387206 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:52:32.388346 systemd-logind[1292]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:52:32.390080 systemd-logind[1292]: Removed session 7. 
Aug 13 00:52:36.704600 kubelet[2099]: I0813 00:52:36.704527 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/953fc024-5087-4a99-a4ac-96bccffb4686-tigera-ca-bundle\") pod \"calico-typha-556cc54988-ws8vs\" (UID: \"953fc024-5087-4a99-a4ac-96bccffb4686\") " pod="calico-system/calico-typha-556cc54988-ws8vs" Aug 13 00:52:36.705190 kubelet[2099]: I0813 00:52:36.704628 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/953fc024-5087-4a99-a4ac-96bccffb4686-typha-certs\") pod \"calico-typha-556cc54988-ws8vs\" (UID: \"953fc024-5087-4a99-a4ac-96bccffb4686\") " pod="calico-system/calico-typha-556cc54988-ws8vs" Aug 13 00:52:36.705190 kubelet[2099]: I0813 00:52:36.704653 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6wj\" (UniqueName: \"kubernetes.io/projected/953fc024-5087-4a99-a4ac-96bccffb4686-kube-api-access-rf6wj\") pod \"calico-typha-556cc54988-ws8vs\" (UID: \"953fc024-5087-4a99-a4ac-96bccffb4686\") " pod="calico-system/calico-typha-556cc54988-ws8vs" Aug 13 00:52:36.909041 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:52:36.909210 kernel: audit: type=1325 audit(1755046356.898:299): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:36.898000 audit[2487]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:36.933978 kubelet[2099]: E0813 00:52:36.933913 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:36.935624 env[1301]: 
time="2025-08-13T00:52:36.935097067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556cc54988-ws8vs,Uid:953fc024-5087-4a99-a4ac-96bccffb4686,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:36.898000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff21714b20 a2=0 a3=7fff21714b0c items=0 ppid=2236 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:36.947584 kernel: audit: type=1300 audit(1755046356.898:299): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff21714b20 a2=0 a3=7fff21714b0c items=0 ppid=2236 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:36.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:36.955477 kernel: audit: type=1327 audit(1755046356.898:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:36.966000 audit[2487]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:36.971479 kernel: audit: type=1325 audit(1755046356.966:300): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:36.966000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21714b20 a2=0 a3=0 items=0 ppid=2236 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:36.976522 kernel: audit: type=1300 audit(1755046356.966:300): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff21714b20 a2=0 a3=0 items=0 ppid=2236 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:36.976650 env[1301]: time="2025-08-13T00:52:36.974084224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:36.976650 env[1301]: time="2025-08-13T00:52:36.974145691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:36.976650 env[1301]: time="2025-08-13T00:52:36.974157453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:36.976650 env[1301]: time="2025-08-13T00:52:36.974586535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4d6ed9b74c4a7077dd4c019fcde8a2aac3432bdea86eec1ca54a92a16f645a7 pid=2499 runtime=io.containerd.runc.v2 Aug 13 00:52:36.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:36.990528 kernel: audit: type=1327 audit(1755046356.966:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:37.019000 audit[2518]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:37.019000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc40a5da70 a2=0 a3=7ffc40a5da5c items=0 ppid=2236 
pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:37.042118 kernel: audit: type=1325 audit(1755046357.019:301): table=filter:95 family=2 entries=19 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:37.042600 kernel: audit: type=1300 audit(1755046357.019:301): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc40a5da70 a2=0 a3=7ffc40a5da5c items=0 ppid=2236 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:37.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:37.049499 kernel: audit: type=1327 audit(1755046357.019:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:37.044000 audit[2518]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:37.056530 kernel: audit: type=1325 audit(1755046357.044:302): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:37.044000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc40a5da70 a2=0 a3=0 items=0 ppid=2236 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:37.044000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:37.205200 env[1301]: time="2025-08-13T00:52:37.205124768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556cc54988-ws8vs,Uid:953fc024-5087-4a99-a4ac-96bccffb4686,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4d6ed9b74c4a7077dd4c019fcde8a2aac3432bdea86eec1ca54a92a16f645a7\"" Aug 13 00:52:37.208656 kubelet[2099]: E0813 00:52:37.206855 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:37.208656 kubelet[2099]: I0813 00:52:37.207674 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-policysync\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.208656 kubelet[2099]: I0813 00:52:37.207768 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-cni-log-dir\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.208656 kubelet[2099]: I0813 00:52:37.207846 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c28bdc-49c3-4803-b189-2d847fdbfce0-tigera-ca-bundle\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.208656 kubelet[2099]: I0813 00:52:37.207929 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56c28bdc-49c3-4803-b189-2d847fdbfce0-node-certs\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.208656 kubelet[2099]: I0813 00:52:37.207963 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-var-lib-calico\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.209689 kubelet[2099]: I0813 00:52:37.207994 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-lib-modules\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.209689 kubelet[2099]: I0813 00:52:37.208028 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-cni-bin-dir\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.209689 kubelet[2099]: I0813 00:52:37.208063 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-var-run-calico\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.209689 kubelet[2099]: I0813 00:52:37.208083 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-xtables-lock\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.209689 kubelet[2099]: I0813 00:52:37.208137 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-cni-net-dir\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.210084 kubelet[2099]: I0813 00:52:37.208178 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56c28bdc-49c3-4803-b189-2d847fdbfce0-flexvol-driver-host\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.210084 kubelet[2099]: I0813 00:52:37.208217 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tpk\" (UniqueName: \"kubernetes.io/projected/56c28bdc-49c3-4803-b189-2d847fdbfce0-kube-api-access-t7tpk\") pod \"calico-node-rg7zr\" (UID: \"56c28bdc-49c3-4803-b189-2d847fdbfce0\") " pod="calico-system/calico-node-rg7zr" Aug 13 00:52:37.215538 env[1301]: time="2025-08-13T00:52:37.211885004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:52:37.293915 kubelet[2099]: E0813 00:52:37.293812 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9" Aug 13 00:52:37.318971 kubelet[2099]: E0813 00:52:37.318898 2099 driver-call.go:262] Failed to unmarshal 
output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.318971 kubelet[2099]: W0813 00:52:37.318949 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.318971 kubelet[2099]: E0813 00:52:37.318991 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.357181 kubelet[2099]: E0813 00:52:37.357145 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.357398 kubelet[2099]: W0813 00:52:37.357377 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.357532 kubelet[2099]: E0813 00:52:37.357511 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.389248 kubelet[2099]: E0813 00:52:37.389206 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.389248 kubelet[2099]: W0813 00:52:37.389245 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.389565 kubelet[2099]: E0813 00:52:37.389280 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.389782 kubelet[2099]: E0813 00:52:37.389747 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.389782 kubelet[2099]: W0813 00:52:37.389778 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.389907 kubelet[2099]: E0813 00:52:37.389806 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.390087 kubelet[2099]: E0813 00:52:37.390065 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.390087 kubelet[2099]: W0813 00:52:37.390082 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.390181 kubelet[2099]: E0813 00:52:37.390096 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.390409 kubelet[2099]: E0813 00:52:37.390387 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.390409 kubelet[2099]: W0813 00:52:37.390405 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.390558 kubelet[2099]: E0813 00:52:37.390422 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.390720 kubelet[2099]: E0813 00:52:37.390700 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.390720 kubelet[2099]: W0813 00:52:37.390716 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.390840 kubelet[2099]: E0813 00:52:37.390730 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.391056 kubelet[2099]: E0813 00:52:37.391037 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.391056 kubelet[2099]: W0813 00:52:37.391054 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.391171 kubelet[2099]: E0813 00:52:37.391068 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.391594 kubelet[2099]: E0813 00:52:37.391542 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.391594 kubelet[2099]: W0813 00:52:37.391594 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.391717 kubelet[2099]: E0813 00:52:37.391613 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.392391 kubelet[2099]: E0813 00:52:37.392364 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.392391 kubelet[2099]: W0813 00:52:37.392389 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.392528 kubelet[2099]: E0813 00:52:37.392408 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.393138 kubelet[2099]: E0813 00:52:37.393079 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.393138 kubelet[2099]: W0813 00:52:37.393115 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.393138 kubelet[2099]: E0813 00:52:37.393134 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.393576 kubelet[2099]: E0813 00:52:37.393546 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.393576 kubelet[2099]: W0813 00:52:37.393573 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.393712 kubelet[2099]: E0813 00:52:37.393592 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.395601 kubelet[2099]: E0813 00:52:37.395573 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.395601 kubelet[2099]: W0813 00:52:37.395594 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.395809 kubelet[2099]: E0813 00:52:37.395616 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.397712 kubelet[2099]: E0813 00:52:37.397677 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.397712 kubelet[2099]: W0813 00:52:37.397704 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.397910 kubelet[2099]: E0813 00:52:37.397731 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.399631 kubelet[2099]: E0813 00:52:37.399597 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.399631 kubelet[2099]: W0813 00:52:37.399621 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.399785 kubelet[2099]: E0813 00:52:37.399643 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.400041 kubelet[2099]: E0813 00:52:37.400020 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.400041 kubelet[2099]: W0813 00:52:37.400035 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.400145 kubelet[2099]: E0813 00:52:37.400055 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.400346 kubelet[2099]: E0813 00:52:37.400328 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.400397 kubelet[2099]: W0813 00:52:37.400343 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.400397 kubelet[2099]: E0813 00:52:37.400367 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.401067 kubelet[2099]: E0813 00:52:37.401032 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.401067 kubelet[2099]: W0813 00:52:37.401069 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.401189 kubelet[2099]: E0813 00:52:37.401085 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.401726 kubelet[2099]: E0813 00:52:37.401705 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.401726 kubelet[2099]: W0813 00:52:37.401720 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.401843 kubelet[2099]: E0813 00:52:37.401738 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.401986 kubelet[2099]: E0813 00:52:37.401970 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.401986 kubelet[2099]: W0813 00:52:37.401981 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.402079 kubelet[2099]: E0813 00:52:37.401999 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.402212 kubelet[2099]: E0813 00:52:37.402197 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.402270 kubelet[2099]: W0813 00:52:37.402213 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.402270 kubelet[2099]: E0813 00:52:37.402223 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.402436 kubelet[2099]: E0813 00:52:37.402420 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.402436 kubelet[2099]: W0813 00:52:37.402431 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.402436 kubelet[2099]: E0813 00:52:37.402458 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.410172 kubelet[2099]: E0813 00:52:37.410125 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.410404 kubelet[2099]: W0813 00:52:37.410382 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.410512 kubelet[2099]: E0813 00:52:37.410496 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.410627 kubelet[2099]: I0813 00:52:37.410611 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e65125c0-f7bb-420d-885a-928dd8165be9-registration-dir\") pod \"csi-node-driver-n8qkv\" (UID: \"e65125c0-f7bb-420d-885a-928dd8165be9\") " pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:37.410993 kubelet[2099]: E0813 00:52:37.410962 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.410993 kubelet[2099]: W0813 00:52:37.410987 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.411113 kubelet[2099]: E0813 00:52:37.411010 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.411239 kubelet[2099]: E0813 00:52:37.411222 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.411239 kubelet[2099]: W0813 00:52:37.411235 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.411343 kubelet[2099]: E0813 00:52:37.411246 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.411556 kubelet[2099]: E0813 00:52:37.411538 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.411556 kubelet[2099]: W0813 00:52:37.411553 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.411654 kubelet[2099]: E0813 00:52:37.411565 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.411654 kubelet[2099]: I0813 00:52:37.411595 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e65125c0-f7bb-420d-885a-928dd8165be9-kubelet-dir\") pod \"csi-node-driver-n8qkv\" (UID: \"e65125c0-f7bb-420d-885a-928dd8165be9\") " pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:37.411898 kubelet[2099]: E0813 00:52:37.411871 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.411898 kubelet[2099]: W0813 00:52:37.411891 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.412039 kubelet[2099]: E0813 00:52:37.411912 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.412039 kubelet[2099]: I0813 00:52:37.411942 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e65125c0-f7bb-420d-885a-928dd8165be9-varrun\") pod \"csi-node-driver-n8qkv\" (UID: \"e65125c0-f7bb-420d-885a-928dd8165be9\") " pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:37.412272 kubelet[2099]: E0813 00:52:37.412135 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.412272 kubelet[2099]: W0813 00:52:37.412149 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.412272 kubelet[2099]: E0813 00:52:37.412165 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.412272 kubelet[2099]: I0813 00:52:37.412194 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e65125c0-f7bb-420d-885a-928dd8165be9-socket-dir\") pod \"csi-node-driver-n8qkv\" (UID: \"e65125c0-f7bb-420d-885a-928dd8165be9\") " pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:37.412536 kubelet[2099]: E0813 00:52:37.412450 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.412536 kubelet[2099]: W0813 00:52:37.412462 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.412536 kubelet[2099]: E0813 00:52:37.412477 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.412536 kubelet[2099]: I0813 00:52:37.412494 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns674\" (UniqueName: \"kubernetes.io/projected/e65125c0-f7bb-420d-885a-928dd8165be9-kube-api-access-ns674\") pod \"csi-node-driver-n8qkv\" (UID: \"e65125c0-f7bb-420d-885a-928dd8165be9\") " pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.412720 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414543 kubelet[2099]: W0813 00:52:37.412739 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.412755 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.412950 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414543 kubelet[2099]: W0813 00:52:37.412961 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.413047 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.413208 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414543 kubelet[2099]: W0813 00:52:37.413219 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.413303 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.414543 kubelet[2099]: E0813 00:52:37.413519 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414980 kubelet[2099]: W0813 00:52:37.413531 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.413617 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.413730 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414980 kubelet[2099]: W0813 00:52:37.413739 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.413803 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.413943 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414980 kubelet[2099]: W0813 00:52:37.413951 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.413962 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.414980 kubelet[2099]: E0813 00:52:37.414183 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.414980 kubelet[2099]: W0813 00:52:37.414193 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.415421 kubelet[2099]: E0813 00:52:37.414203 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.415421 kubelet[2099]: E0813 00:52:37.414386 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.415421 kubelet[2099]: W0813 00:52:37.414395 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.415421 kubelet[2099]: E0813 00:52:37.414405 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.478745 env[1301]: time="2025-08-13T00:52:37.478181143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rg7zr,Uid:56c28bdc-49c3-4803-b189-2d847fdbfce0,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:37.509346 env[1301]: time="2025-08-13T00:52:37.509269661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:52:37.509603 env[1301]: time="2025-08-13T00:52:37.509573581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:52:37.509706 env[1301]: time="2025-08-13T00:52:37.509682808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:52:37.509996 env[1301]: time="2025-08-13T00:52:37.509962478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11 pid=2586 runtime=io.containerd.runc.v2 Aug 13 00:52:37.513633 kubelet[2099]: E0813 00:52:37.513592 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.513633 kubelet[2099]: W0813 00:52:37.513623 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.513872 kubelet[2099]: E0813 00:52:37.513654 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.514063 kubelet[2099]: E0813 00:52:37.514042 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.514063 kubelet[2099]: W0813 00:52:37.514062 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.514174 kubelet[2099]: E0813 00:52:37.514087 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.514426 kubelet[2099]: E0813 00:52:37.514405 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.514426 kubelet[2099]: W0813 00:52:37.514424 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.514553 kubelet[2099]: E0813 00:52:37.514456 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.514740 kubelet[2099]: E0813 00:52:37.514723 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.514740 kubelet[2099]: W0813 00:52:37.514740 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.514832 kubelet[2099]: E0813 00:52:37.514761 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.515102 kubelet[2099]: E0813 00:52:37.515080 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.515102 kubelet[2099]: W0813 00:52:37.515100 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.515319 kubelet[2099]: E0813 00:52:37.515245 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.515381 kubelet[2099]: E0813 00:52:37.515361 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.515426 kubelet[2099]: W0813 00:52:37.515381 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.515492 kubelet[2099]: E0813 00:52:37.515474 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.515718 kubelet[2099]: E0813 00:52:37.515698 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.515773 kubelet[2099]: W0813 00:52:37.515720 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.515851 kubelet[2099]: E0813 00:52:37.515820 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.516001 kubelet[2099]: E0813 00:52:37.515982 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.516051 kubelet[2099]: W0813 00:52:37.516001 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.516126 kubelet[2099]: E0813 00:52:37.516108 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.516289 kubelet[2099]: E0813 00:52:37.516272 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.516341 kubelet[2099]: W0813 00:52:37.516289 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.516434 kubelet[2099]: E0813 00:52:37.516398 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.516552 kubelet[2099]: E0813 00:52:37.516536 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.516552 kubelet[2099]: W0813 00:52:37.516551 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.516632 kubelet[2099]: E0813 00:52:37.516570 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.516894 kubelet[2099]: E0813 00:52:37.516872 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.516894 kubelet[2099]: W0813 00:52:37.516892 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.517076 kubelet[2099]: E0813 00:52:37.517048 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.519808 kubelet[2099]: E0813 00:52:37.519762 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.519808 kubelet[2099]: W0813 00:52:37.519800 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.520070 kubelet[2099]: E0813 00:52:37.520032 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.520149 kubelet[2099]: E0813 00:52:37.520129 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.520193 kubelet[2099]: W0813 00:52:37.520150 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.520295 kubelet[2099]: E0813 00:52:37.520273 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.522797 kubelet[2099]: E0813 00:52:37.522755 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.522797 kubelet[2099]: W0813 00:52:37.522788 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.523060 kubelet[2099]: E0813 00:52:37.523018 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.523131 kubelet[2099]: E0813 00:52:37.523087 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.523131 kubelet[2099]: W0813 00:52:37.523100 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.523232 kubelet[2099]: E0813 00:52:37.523208 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.530743 kubelet[2099]: E0813 00:52:37.530660 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.530743 kubelet[2099]: W0813 00:52:37.530701 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531088 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531097 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.531849 kubelet[2099]: W0813 00:52:37.531159 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531428 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.531849 kubelet[2099]: W0813 00:52:37.531500 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531518 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531603 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.531849 kubelet[2099]: E0813 00:52:37.531750 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.531849 kubelet[2099]: W0813 00:52:37.531761 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.533613 kubelet[2099]: E0813 00:52:37.533566 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.539941 kubelet[2099]: E0813 00:52:37.533910 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.539941 kubelet[2099]: W0813 00:52:37.533933 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.539941 kubelet[2099]: E0813 00:52:37.534211 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.539941 kubelet[2099]: W0813 00:52:37.534222 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.539941 kubelet[2099]: E0813 00:52:37.534420 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.539941 kubelet[2099]: W0813 00:52:37.534428 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.539941 kubelet[2099]: E0813 00:52:37.534617 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.539941 kubelet[2099]: W0813 00:52:37.534625 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.539941 kubelet[2099]: E0813 00:52:37.534853 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.539941 kubelet[2099]: W0813 00:52:37.534869 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.540473 kubelet[2099]: E0813 00:52:37.534890 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.540473 kubelet[2099]: E0813 00:52:37.534968 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.540473 kubelet[2099]: E0813 00:52:37.535514 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.540473 kubelet[2099]: E0813 00:52:37.535566 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.540473 kubelet[2099]: E0813 00:52:37.535862 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.540884 kubelet[2099]: E0813 00:52:37.540755 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.540884 kubelet[2099]: W0813 00:52:37.540785 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.540884 kubelet[2099]: E0813 00:52:37.540823 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:37.566523 kubelet[2099]: E0813 00:52:37.566423 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:37.566797 kubelet[2099]: W0813 00:52:37.566771 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:37.566959 kubelet[2099]: E0813 00:52:37.566921 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:37.662792 env[1301]: time="2025-08-13T00:52:37.662686412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rg7zr,Uid:56c28bdc-49c3-4803-b189-2d847fdbfce0,Namespace:calico-system,Attempt:0,} returns sandbox id \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\"" Aug 13 00:52:38.076000 audit[2649]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:38.076000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd62a48640 a2=0 a3=7ffd62a4862c items=0 ppid=2236 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:38.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:38.081000 audit[2649]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2649 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:52:38.081000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd62a48640 a2=0 a3=0 items=0 ppid=2236 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:52:38.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:52:38.651622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453345282.mount: Deactivated successfully. 
Aug 13 00:52:39.200711 kubelet[2099]: E0813 00:52:39.200655 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9" Aug 13 00:52:40.070589 env[1301]: time="2025-08-13T00:52:40.070533555Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:40.072891 env[1301]: time="2025-08-13T00:52:40.072833626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:40.075328 env[1301]: time="2025-08-13T00:52:40.075278999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:40.077409 env[1301]: time="2025-08-13T00:52:40.077351239Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:40.080084 env[1301]: time="2025-08-13T00:52:40.080009511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:52:40.082366 env[1301]: time="2025-08-13T00:52:40.082294423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:52:40.117327 env[1301]: time="2025-08-13T00:52:40.117181028Z" level=info msg="CreateContainer within sandbox 
\"f4d6ed9b74c4a7077dd4c019fcde8a2aac3432bdea86eec1ca54a92a16f645a7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:52:40.134179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874375440.mount: Deactivated successfully. Aug 13 00:52:40.143879 env[1301]: time="2025-08-13T00:52:40.143335888Z" level=info msg="CreateContainer within sandbox \"f4d6ed9b74c4a7077dd4c019fcde8a2aac3432bdea86eec1ca54a92a16f645a7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed1eba68c2ebf390e3358e66a51e2844c8ff55bf84f11a4c699d5606ae422083\"" Aug 13 00:52:40.147374 env[1301]: time="2025-08-13T00:52:40.146738987Z" level=info msg="StartContainer for \"ed1eba68c2ebf390e3358e66a51e2844c8ff55bf84f11a4c699d5606ae422083\"" Aug 13 00:52:40.360008 env[1301]: time="2025-08-13T00:52:40.359857136Z" level=info msg="StartContainer for \"ed1eba68c2ebf390e3358e66a51e2844c8ff55bf84f11a4c699d5606ae422083\" returns successfully" Aug 13 00:52:41.201308 kubelet[2099]: E0813 00:52:41.201198 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9" Aug 13 00:52:41.338617 kubelet[2099]: E0813 00:52:41.338054 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:41.338617 kubelet[2099]: E0813 00:52:41.338217 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:41.338617 kubelet[2099]: W0813 00:52:41.338250 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Aug 13 00:52:41.338617 kubelet[2099]: E0813 00:52:41.338276 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:41.338617 kubelet[2099]: E0813 00:52:41.338603 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:41.338617 kubelet[2099]: W0813 00:52:41.338641 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:41.339149 kubelet[2099]: E0813 00:52:41.338661 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:41.339149 kubelet[2099]: E0813 00:52:41.339010 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:41.339149 kubelet[2099]: W0813 00:52:41.339025 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:41.339149 kubelet[2099]: E0813 00:52:41.339041 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:52:41.339330 kubelet[2099]: E0813 00:52:41.339267 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:41.339330 kubelet[2099]: W0813 00:52:41.339285 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:41.339330 kubelet[2099]: E0813 00:52:41.339302 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:52:41.339561 kubelet[2099]: E0813 00:52:41.339543 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:52:41.339561 kubelet[2099]: W0813 00:52:41.339557 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:52:41.339561 kubelet[2099]: E0813 00:52:41.339569 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 00:52:41.339849 kubelet[2099]: E0813 00:52:41.339827 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.339849 kubelet[2099]: W0813 00:52:41.339846 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.340002 kubelet[2099]: E0813 00:52:41.339862 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.340175 kubelet[2099]: E0813 00:52:41.340133 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.340175 kubelet[2099]: W0813 00:52:41.340150 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.340175 kubelet[2099]: E0813 00:52:41.340164 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.340436 kubelet[2099]: E0813 00:52:41.340420 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.340436 kubelet[2099]: W0813 00:52:41.340435 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.343914 kubelet[2099]: E0813 00:52:41.340462 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.346420 kubelet[2099]: E0813 00:52:41.345413 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.346420 kubelet[2099]: W0813 00:52:41.345695 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.346420 kubelet[2099]: E0813 00:52:41.345737 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.347010 kubelet[2099]: E0813 00:52:41.346833 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.347010 kubelet[2099]: W0813 00:52:41.346855 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.347010 kubelet[2099]: E0813 00:52:41.346879 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.347308 kubelet[2099]: E0813 00:52:41.347290 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.347477 kubelet[2099]: W0813 00:52:41.347457 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.348459 kubelet[2099]: E0813 00:52:41.348414 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.349023 kubelet[2099]: E0813 00:52:41.349002 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.349179 kubelet[2099]: W0813 00:52:41.349158 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.349288 kubelet[2099]: E0813 00:52:41.349269 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.349853 kubelet[2099]: E0813 00:52:41.349832 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.349990 kubelet[2099]: W0813 00:52:41.349969 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.350110 kubelet[2099]: E0813 00:52:41.350089 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.350742 kubelet[2099]: E0813 00:52:41.350692 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.350954 kubelet[2099]: W0813 00:52:41.350932 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.352662 kubelet[2099]: E0813 00:52:41.351122 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.357963 kubelet[2099]: E0813 00:52:41.357915 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.358285 kubelet[2099]: W0813 00:52:41.358254 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.358417 kubelet[2099]: E0813 00:52:41.358398 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.360105 kubelet[2099]: E0813 00:52:41.360072 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.360506 kubelet[2099]: W0813 00:52:41.360477 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.360713 kubelet[2099]: E0813 00:52:41.360688 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.361386 kubelet[2099]: E0813 00:52:41.361362 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.361603 kubelet[2099]: W0813 00:52:41.361580 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.361741 kubelet[2099]: E0813 00:52:41.361726 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.364819 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.368541 kubelet[2099]: W0813 00:52:41.364854 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.364887 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.365411 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.368541 kubelet[2099]: W0813 00:52:41.365433 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.365662 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.366087 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.368541 kubelet[2099]: W0813 00:52:41.366101 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.366119 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.368541 kubelet[2099]: E0813 00:52:41.366973 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369025 kubelet[2099]: W0813 00:52:41.366991 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.367013 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.367566 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369025 kubelet[2099]: W0813 00:52:41.367580 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.367654 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.367920 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369025 kubelet[2099]: W0813 00:52:41.367932 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.367948 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369025 kubelet[2099]: E0813 00:52:41.368355 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369025 kubelet[2099]: W0813 00:52:41.368370 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.368428 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.368729 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369350 kubelet[2099]: W0813 00:52:41.368741 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.368757 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.368985 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369350 kubelet[2099]: W0813 00:52:41.368997 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.369010 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.369304 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.369350 kubelet[2099]: W0813 00:52:41.369320 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.369350 kubelet[2099]: E0813 00:52:41.369334 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.370258 kubelet[2099]: E0813 00:52:41.370125 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.370258 kubelet[2099]: W0813 00:52:41.370206 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.370258 kubelet[2099]: E0813 00:52:41.370223 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.370512 kubelet[2099]: E0813 00:52:41.370492 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.370512 kubelet[2099]: W0813 00:52:41.370507 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.370616 kubelet[2099]: E0813 00:52:41.370520 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.370765 kubelet[2099]: E0813 00:52:41.370744 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.370765 kubelet[2099]: W0813 00:52:41.370762 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.370855 kubelet[2099]: E0813 00:52:41.370774 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.370974 kubelet[2099]: E0813 00:52:41.370955 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.370974 kubelet[2099]: W0813 00:52:41.370969 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.371073 kubelet[2099]: E0813 00:52:41.370980 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.371236 kubelet[2099]: E0813 00:52:41.371215 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.371236 kubelet[2099]: W0813 00:52:41.371230 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.371358 kubelet[2099]: E0813 00:52:41.371241 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.372278 kubelet[2099]: E0813 00:52:41.372241 2099 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:52:41.372278 kubelet[2099]: W0813 00:52:41.372266 2099 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:52:41.372544 kubelet[2099]: E0813 00:52:41.372290 2099 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:52:41.373749 kubelet[2099]: I0813 00:52:41.372677 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-556cc54988-ws8vs" podStartSLOduration=2.503082977 podStartE2EDuration="5.372653254s" podCreationTimestamp="2025-08-13 00:52:36 +0000 UTC" firstStartedPulling="2025-08-13 00:52:37.211434376 +0000 UTC m=+21.374125745" lastFinishedPulling="2025-08-13 00:52:40.081004653 +0000 UTC m=+24.243696022" observedRunningTime="2025-08-13 00:52:41.371482784 +0000 UTC m=+25.534174194" watchObservedRunningTime="2025-08-13 00:52:41.372653254 +0000 UTC m=+25.535344644"
Aug 13 00:52:41.733973 env[1301]: time="2025-08-13T00:52:41.733901285Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:41.735614 env[1301]: time="2025-08-13T00:52:41.735552306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:41.740494 env[1301]: time="2025-08-13T00:52:41.739369759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:41.742067 env[1301]: time="2025-08-13T00:52:41.741988101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 00:52:41.742252 env[1301]: time="2025-08-13T00:52:41.741045958Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:41.748685 env[1301]: time="2025-08-13T00:52:41.748625518Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 00:52:41.773222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314400671.mount: Deactivated successfully.
Aug 13 00:52:41.779078 env[1301]: time="2025-08-13T00:52:41.778999143Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9\""
Aug 13 00:52:41.780858 env[1301]: time="2025-08-13T00:52:41.780806382Z" level=info msg="StartContainer for \"1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9\""
Aug 13 00:52:41.881209 env[1301]: time="2025-08-13T00:52:41.881142486Z" level=info msg="StartContainer for \"1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9\" returns successfully"
Aug 13 00:52:41.944966 env[1301]: time="2025-08-13T00:52:41.944881717Z" level=info msg="shim disconnected" id=1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9
Aug 13 00:52:41.945584 env[1301]: time="2025-08-13T00:52:41.945539195Z" level=warning msg="cleaning up after shim disconnected" id=1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9 namespace=k8s.io
Aug 13 00:52:41.945806 env[1301]: time="2025-08-13T00:52:41.945778252Z" level=info msg="cleaning up dead shim"
Aug 13 00:52:41.961296 env[1301]: time="2025-08-13T00:52:41.961230804Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:52:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2771 runtime=io.containerd.runc.v2\n"
Aug 13 00:52:42.094208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f1ffd6430bb1cf5561680aaec6786807d1f59649497670af8598739cf987cb9-rootfs.mount: Deactivated successfully.
Aug 13 00:52:42.342913 kubelet[2099]: I0813 00:52:42.342764 2099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:52:42.343510 kubelet[2099]: E0813 00:52:42.343418 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:52:42.345580 env[1301]: time="2025-08-13T00:52:42.345416917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 00:52:43.201404 kubelet[2099]: E0813 00:52:43.201328 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9"
Aug 13 00:52:45.200686 kubelet[2099]: E0813 00:52:45.200415 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9"
Aug 13 00:52:46.632777 env[1301]: time="2025-08-13T00:52:46.632709722Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:46.634627 env[1301]: time="2025-08-13T00:52:46.634576793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:46.637316 env[1301]: time="2025-08-13T00:52:46.637268754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:46.641002 env[1301]: time="2025-08-13T00:52:46.640933274Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:52:46.643775 env[1301]: time="2025-08-13T00:52:46.643704579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 00:52:46.650318 env[1301]: time="2025-08-13T00:52:46.650004421Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 00:52:46.678324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312827422.mount: Deactivated successfully.
Aug 13 00:52:46.684143 env[1301]: time="2025-08-13T00:52:46.684064220Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab\""
Aug 13 00:52:46.687644 env[1301]: time="2025-08-13T00:52:46.687586062Z" level=info msg="StartContainer for \"db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab\""
Aug 13 00:52:46.745032 systemd[1]: run-containerd-runc-k8s.io-db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab-runc.Q2vAON.mount: Deactivated successfully.
Aug 13 00:52:46.805252 env[1301]: time="2025-08-13T00:52:46.805193235Z" level=info msg="StartContainer for \"db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab\" returns successfully"
Aug 13 00:52:47.201183 kubelet[2099]: E0813 00:52:47.201080 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9"
Aug 13 00:52:47.605762 env[1301]: time="2025-08-13T00:52:47.605676909Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:52:47.644958 env[1301]: time="2025-08-13T00:52:47.644839678Z" level=info msg="shim disconnected" id=db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab
Aug 13 00:52:47.644958 env[1301]: time="2025-08-13T00:52:47.644959457Z" level=warning msg="cleaning up after shim disconnected" id=db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab namespace=k8s.io
Aug 13 00:52:47.645892 env[1301]: time="2025-08-13T00:52:47.644975174Z" level=info msg="cleaning up dead shim"
Aug 13 00:52:47.657971 env[1301]: time="2025-08-13T00:52:47.657898108Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2840 runtime=io.containerd.runc.v2\n"
Aug 13 00:52:47.673240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db7ecb90a3ae75551bb6c04f49b29d5f26a0ae8f1a180e430582c14b183202ab-rootfs.mount: Deactivated successfully.
Aug 13 00:52:47.682255 kubelet[2099]: I0813 00:52:47.681366 2099 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 00:52:47.753791 kubelet[2099]: W0813 00:52:47.753739 2099 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.8-8-adc8b0fbd5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object
Aug 13 00:52:47.754141 kubelet[2099]: E0813 00:52:47.754115 2099 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.8-8-adc8b0fbd5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object" logger="UnhandledError"
Aug 13 00:52:47.758748 kubelet[2099]: W0813 00:52:47.758688 2099 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-8-adc8b0fbd5" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object
Aug 13 00:52:47.760132 kubelet[2099]: E0813 00:52:47.760088 2099 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-8-adc8b0fbd5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object" logger="UnhandledError"
Aug 13 00:52:47.760537 kubelet[2099]: W0813 00:52:47.760500 2099 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510.3.8-8-adc8b0fbd5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object
Aug 13 00:52:47.760744 kubelet[2099]: E0813 00:52:47.760713 2099 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-3510.3.8-8-adc8b0fbd5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510.3.8-8-adc8b0fbd5' and this object" logger="UnhandledError"
Aug 13 00:52:47.825632 kubelet[2099]: I0813 00:52:47.825591 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shw8c\" (UniqueName: \"kubernetes.io/projected/5af231c3-7046-4023-9f1c-637c842bb333-kube-api-access-shw8c\") pod \"coredns-7c65d6cfc9-5846w\" (UID: \"5af231c3-7046-4023-9f1c-637c842bb333\") " pod="kube-system/coredns-7c65d6cfc9-5846w"
Aug 13 00:52:47.825906 kubelet[2099]: I0813 00:52:47.825887 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca17bcc4-9279-4936-bb62-b2a432984a63-tigera-ca-bundle\") pod \"calico-kube-controllers-6d798fdc4f-v55r9\" (UID: \"ca17bcc4-9279-4936-bb62-b2a432984a63\") " pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9"
Aug 13 00:52:47.826251 kubelet[2099]: I0813 00:52:47.826230 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5af231c3-7046-4023-9f1c-637c842bb333-config-volume\") pod \"coredns-7c65d6cfc9-5846w\" (UID: \"5af231c3-7046-4023-9f1c-637c842bb333\") " pod="kube-system/coredns-7c65d6cfc9-5846w"
Aug 13 00:52:47.826525 kubelet[2099]: I0813 00:52:47.826487 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kfz\" (UniqueName: \"kubernetes.io/projected/ca17bcc4-9279-4936-bb62-b2a432984a63-kube-api-access-f7kfz\") pod \"calico-kube-controllers-6d798fdc4f-v55r9\" (UID: \"ca17bcc4-9279-4936-bb62-b2a432984a63\") " pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9"
Aug 13 00:52:47.928486 kubelet[2099]: I0813 00:52:47.928292 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3eb358bf-331f-4984-a4e4-9d6d55f60ba0-calico-apiserver-certs\") pod \"calico-apiserver-6c77cb5bfc-pg6q9\" (UID: \"3eb358bf-331f-4984-a4e4-9d6d55f60ba0\") " pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9"
Aug 13 00:52:47.929681 kubelet[2099]: I0813 00:52:47.929644 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n289\" (UniqueName: \"kubernetes.io/projected/b1be0258-fed9-4a06-957e-38cfc4092975-kube-api-access-4n289\") pod \"whisker-6b78684bd4-f847k\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " pod="calico-system/whisker-6b78684bd4-f847k"
Aug 13 00:52:47.930068 kubelet[2099]: I0813 00:52:47.930027 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r55l\" (UniqueName: \"kubernetes.io/projected/3eb358bf-331f-4984-a4e4-9d6d55f60ba0-kube-api-access-5r55l\") pod \"calico-apiserver-6c77cb5bfc-pg6q9\" (UID: \"3eb358bf-331f-4984-a4e4-9d6d55f60ba0\") " pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9"
Aug 13 00:52:47.930216 kubelet[2099]: I0813 00:52:47.930069 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-backend-key-pair\") pod \"whisker-6b78684bd4-f847k\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " pod="calico-system/whisker-6b78684bd4-f847k"
Aug 13 00:52:47.930216 kubelet[2099]: I0813 00:52:47.930094 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6dmd\" (UniqueName: \"kubernetes.io/projected/286d844b-f8f2-4cbc-961c-669a123d9626-kube-api-access-h6dmd\") pod \"coredns-7c65d6cfc9-vhxc6\" (UID: \"286d844b-f8f2-4cbc-961c-669a123d9626\") " pod="kube-system/coredns-7c65d6cfc9-vhxc6"
Aug 13 00:52:47.930216 kubelet[2099]: I0813 00:52:47.930143 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-ca-bundle\") pod \"whisker-6b78684bd4-f847k\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " pod="calico-system/whisker-6b78684bd4-f847k"
Aug 13 00:52:47.930216 kubelet[2099]: I0813 00:52:47.930166 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gr9k\" (UniqueName: \"kubernetes.io/projected/167dde7c-6e36-48ea-bd63-42e66d6a64d2-kube-api-access-6gr9k\") pod \"calico-apiserver-6c77cb5bfc-vlzdm\" (UID: \"167dde7c-6e36-48ea-bd63-42e66d6a64d2\") " pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm"
Aug 13 00:52:47.930216 kubelet[2099]: I0813 00:52:47.930197 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e647cce1-5592-4786-90e1-64c87a11f433-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qzqlj\" (UID: \"e647cce1-5592-4786-90e1-64c87a11f433\") " pod="calico-system/goldmane-58fd7646b9-qzqlj"
Aug 13 00:52:47.930407 kubelet[2099]: I0813 00:52:47.930241 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/167dde7c-6e36-48ea-bd63-42e66d6a64d2-calico-apiserver-certs\") pod \"calico-apiserver-6c77cb5bfc-vlzdm\" (UID: \"167dde7c-6e36-48ea-bd63-42e66d6a64d2\") " pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm"
Aug 13 00:52:47.930407 kubelet[2099]: I0813 00:52:47.930270 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7rj\" (UniqueName: \"kubernetes.io/projected/e647cce1-5592-4786-90e1-64c87a11f433-kube-api-access-cr7rj\") pod \"goldmane-58fd7646b9-qzqlj\" (UID: \"e647cce1-5592-4786-90e1-64c87a11f433\") " pod="calico-system/goldmane-58fd7646b9-qzqlj"
Aug 13 00:52:47.930407 kubelet[2099]: I0813 00:52:47.930305 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e647cce1-5592-4786-90e1-64c87a11f433-config\") pod \"goldmane-58fd7646b9-qzqlj\" (UID: \"e647cce1-5592-4786-90e1-64c87a11f433\") " pod="calico-system/goldmane-58fd7646b9-qzqlj"
Aug 13 00:52:47.930407 kubelet[2099]: I0813 00:52:47.930334 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e647cce1-5592-4786-90e1-64c87a11f433-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qzqlj\" (UID: \"e647cce1-5592-4786-90e1-64c87a11f433\") " pod="calico-system/goldmane-58fd7646b9-qzqlj"
Aug 13 00:52:47.930407 kubelet[2099]: I0813 00:52:47.930357 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/286d844b-f8f2-4cbc-961c-669a123d9626-config-volume\") pod \"coredns-7c65d6cfc9-vhxc6\" (UID: \"286d844b-f8f2-4cbc-961c-669a123d9626\") " pod="kube-system/coredns-7c65d6cfc9-vhxc6"
Aug 13 00:52:48.060725 env[1301]: time="2025-08-13T00:52:48.060631755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d798fdc4f-v55r9,Uid:ca17bcc4-9279-4936-bb62-b2a432984a63,Namespace:calico-system,Attempt:0,}"
Aug 13 00:52:48.276896 env[1301]: time="2025-08-13T00:52:48.276711312Z" level=error msg="Failed to destroy network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:52:48.278070 env[1301]: time="2025-08-13T00:52:48.278000100Z" level=error msg="encountered an error cleaning up failed sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:52:48.278252 env[1301]: time="2025-08-13T00:52:48.278079971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d798fdc4f-v55r9,Uid:ca17bcc4-9279-4936-bb62-b2a432984a63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:52:48.278666 kubelet[2099]: E0813 00:52:48.278562 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the
calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.281198 kubelet[2099]: E0813 00:52:48.278671 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9" Aug 13 00:52:48.281198 kubelet[2099]: E0813 00:52:48.278704 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9" Aug 13 00:52:48.281198 kubelet[2099]: E0813 00:52:48.278763 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d798fdc4f-v55r9_calico-system(ca17bcc4-9279-4936-bb62-b2a432984a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d798fdc4f-v55r9_calico-system(ca17bcc4-9279-4936-bb62-b2a432984a63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9" podUID="ca17bcc4-9279-4936-bb62-b2a432984a63" Aug 13 00:52:48.354356 env[1301]: 
time="2025-08-13T00:52:48.354278955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qzqlj,Uid:e647cce1-5592-4786-90e1-64c87a11f433,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:48.367755 env[1301]: time="2025-08-13T00:52:48.364813437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:52:48.367755 env[1301]: time="2025-08-13T00:52:48.367215500Z" level=info msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" Aug 13 00:52:48.373681 kubelet[2099]: I0813 00:52:48.366373 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:52:48.373752 env[1301]: time="2025-08-13T00:52:48.369669788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b78684bd4-f847k,Uid:b1be0258-fed9-4a06-957e-38cfc4092975,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:48.445341 env[1301]: time="2025-08-13T00:52:48.445278671Z" level=error msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" failed" error="failed to destroy network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.446169 kubelet[2099]: E0813 00:52:48.445847 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:52:48.446169 
kubelet[2099]: E0813 00:52:48.445914 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2"} Aug 13 00:52:48.446169 kubelet[2099]: E0813 00:52:48.445985 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca17bcc4-9279-4936-bb62-b2a432984a63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:48.446169 kubelet[2099]: E0813 00:52:48.446010 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca17bcc4-9279-4936-bb62-b2a432984a63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9" podUID="ca17bcc4-9279-4936-bb62-b2a432984a63" Aug 13 00:52:48.499917 env[1301]: time="2025-08-13T00:52:48.499828064Z" level=error msg="Failed to destroy network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.500533 env[1301]: time="2025-08-13T00:52:48.500494140Z" level=error msg="encountered an error cleaning up failed sandbox 
\"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.501670 env[1301]: time="2025-08-13T00:52:48.500679470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qzqlj,Uid:e647cce1-5592-4786-90e1-64c87a11f433,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.501826 kubelet[2099]: E0813 00:52:48.500959 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.501826 kubelet[2099]: E0813 00:52:48.501059 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qzqlj" Aug 13 00:52:48.501826 kubelet[2099]: E0813 00:52:48.501082 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qzqlj" Aug 13 00:52:48.502010 kubelet[2099]: E0813 00:52:48.501132 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qzqlj_calico-system(e647cce1-5592-4786-90e1-64c87a11f433)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qzqlj_calico-system(e647cce1-5592-4786-90e1-64c87a11f433)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qzqlj" podUID="e647cce1-5592-4786-90e1-64c87a11f433" Aug 13 00:52:48.521987 env[1301]: time="2025-08-13T00:52:48.521921668Z" level=error msg="Failed to destroy network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.522645 env[1301]: time="2025-08-13T00:52:48.522600780Z" level=error msg="encountered an error cleaning up failed sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.522810 env[1301]: time="2025-08-13T00:52:48.522782196Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b78684bd4-f847k,Uid:b1be0258-fed9-4a06-957e-38cfc4092975,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.523782 kubelet[2099]: E0813 00:52:48.523114 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.523782 kubelet[2099]: E0813 00:52:48.523197 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b78684bd4-f847k" Aug 13 00:52:48.523782 kubelet[2099]: E0813 00:52:48.523228 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b78684bd4-f847k" Aug 13 00:52:48.524017 kubelet[2099]: E0813 00:52:48.523285 2099 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b78684bd4-f847k_calico-system(b1be0258-fed9-4a06-957e-38cfc4092975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b78684bd4-f847k_calico-system(b1be0258-fed9-4a06-957e-38cfc4092975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b78684bd4-f847k" podUID="b1be0258-fed9-4a06-957e-38cfc4092975" Aug 13 00:52:48.636355 kubelet[2099]: E0813 00:52:48.636301 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:48.638895 env[1301]: time="2025-08-13T00:52:48.638819850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5846w,Uid:5af231c3-7046-4023-9f1c-637c842bb333,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:48.678489 kubelet[2099]: E0813 00:52:48.675869 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:52:48.682261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2-shm.mount: Deactivated successfully. 
Aug 13 00:52:48.684354 env[1301]: time="2025-08-13T00:52:48.683647048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vhxc6,Uid:286d844b-f8f2-4cbc-961c-669a123d9626,Namespace:kube-system,Attempt:0,}" Aug 13 00:52:48.796154 env[1301]: time="2025-08-13T00:52:48.796073810Z" level=error msg="Failed to destroy network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.800925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6-shm.mount: Deactivated successfully. Aug 13 00:52:48.803807 env[1301]: time="2025-08-13T00:52:48.803721769Z" level=error msg="encountered an error cleaning up failed sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.804018 env[1301]: time="2025-08-13T00:52:48.803983696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5846w,Uid:5af231c3-7046-4023-9f1c-637c842bb333,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.804425 kubelet[2099]: E0813 00:52:48.804360 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.804609 kubelet[2099]: E0813 00:52:48.804475 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5846w" Aug 13 00:52:48.804609 kubelet[2099]: E0813 00:52:48.804510 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5846w" Aug 13 00:52:48.804609 kubelet[2099]: E0813 00:52:48.804582 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5846w_kube-system(5af231c3-7046-4023-9f1c-637c842bb333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5846w_kube-system(5af231c3-7046-4023-9f1c-637c842bb333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5846w" 
podUID="5af231c3-7046-4023-9f1c-637c842bb333" Aug 13 00:52:48.834331 env[1301]: time="2025-08-13T00:52:48.834255080Z" level=error msg="Failed to destroy network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.837926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c-shm.mount: Deactivated successfully. Aug 13 00:52:48.842886 env[1301]: time="2025-08-13T00:52:48.842793095Z" level=error msg="encountered an error cleaning up failed sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.844862 env[1301]: time="2025-08-13T00:52:48.844793137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vhxc6,Uid:286d844b-f8f2-4cbc-961c-669a123d9626,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:48.845753 kubelet[2099]: E0813 00:52:48.845295 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Aug 13 00:52:48.845753 kubelet[2099]: E0813 00:52:48.845386 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vhxc6" Aug 13 00:52:48.845753 kubelet[2099]: E0813 00:52:48.845428 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vhxc6" Aug 13 00:52:48.847420 kubelet[2099]: E0813 00:52:48.845494 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vhxc6_kube-system(286d844b-f8f2-4cbc-961c-669a123d9626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vhxc6_kube-system(286d844b-f8f2-4cbc-961c-669a123d9626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vhxc6" podUID="286d844b-f8f2-4cbc-961c-669a123d9626" Aug 13 00:52:49.057785 kubelet[2099]: E0813 00:52:49.057608 2099 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out 
waiting for the condition Aug 13 00:52:49.057785 kubelet[2099]: E0813 00:52:49.057685 2099 projected.go:194] Error preparing data for projected volume kube-api-access-5r55l for pod calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:52:49.058053 kubelet[2099]: E0813 00:52:49.057809 2099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3eb358bf-331f-4984-a4e4-9d6d55f60ba0-kube-api-access-5r55l podName:3eb358bf-331f-4984-a4e4-9d6d55f60ba0 nodeName:}" failed. No retries permitted until 2025-08-13 00:52:49.55775405 +0000 UTC m=+33.720445435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5r55l" (UniqueName: "kubernetes.io/projected/3eb358bf-331f-4984-a4e4-9d6d55f60ba0-kube-api-access-5r55l") pod "calico-apiserver-6c77cb5bfc-pg6q9" (UID: "3eb358bf-331f-4984-a4e4-9d6d55f60ba0") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:52:49.058199 kubelet[2099]: E0813 00:52:49.058114 2099 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:52:49.058199 kubelet[2099]: E0813 00:52:49.058141 2099 projected.go:194] Error preparing data for projected volume kube-api-access-6gr9k for pod calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:52:49.058199 kubelet[2099]: E0813 00:52:49.058189 2099 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/167dde7c-6e36-48ea-bd63-42e66d6a64d2-kube-api-access-6gr9k podName:167dde7c-6e36-48ea-bd63-42e66d6a64d2 nodeName:}" failed. No retries permitted until 2025-08-13 00:52:49.558175778 +0000 UTC m=+33.720867147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6gr9k" (UniqueName: "kubernetes.io/projected/167dde7c-6e36-48ea-bd63-42e66d6a64d2-kube-api-access-6gr9k") pod "calico-apiserver-6c77cb5bfc-vlzdm" (UID: "167dde7c-6e36-48ea-bd63-42e66d6a64d2") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:52:49.205120 env[1301]: time="2025-08-13T00:52:49.205059913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8qkv,Uid:e65125c0-f7bb-420d-885a-928dd8165be9,Namespace:calico-system,Attempt:0,}" Aug 13 00:52:49.293056 env[1301]: time="2025-08-13T00:52:49.292989330Z" level=error msg="Failed to destroy network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.293820 env[1301]: time="2025-08-13T00:52:49.293762880Z" level=error msg="encountered an error cleaning up failed sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.294019 env[1301]: time="2025-08-13T00:52:49.293989460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8qkv,Uid:e65125c0-f7bb-420d-885a-928dd8165be9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.294367 kubelet[2099]: E0813 00:52:49.294315 2099 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.294840 kubelet[2099]: E0813 00:52:49.294385 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:49.294840 kubelet[2099]: E0813 00:52:49.294414 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8qkv" Aug 13 00:52:49.294840 kubelet[2099]: E0813 00:52:49.294490 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8qkv_calico-system(e65125c0-f7bb-420d-885a-928dd8165be9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8qkv_calico-system(e65125c0-f7bb-420d-885a-928dd8165be9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9" Aug 13 00:52:49.371208 kubelet[2099]: I0813 00:52:49.371160 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:52:49.372691 env[1301]: time="2025-08-13T00:52:49.372649497Z" level=info msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" Aug 13 00:52:49.374396 kubelet[2099]: I0813 00:52:49.374365 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:52:49.375003 env[1301]: time="2025-08-13T00:52:49.374956469Z" level=info msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" Aug 13 00:52:49.378637 kubelet[2099]: I0813 00:52:49.378572 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:52:49.382955 env[1301]: time="2025-08-13T00:52:49.382896680Z" level=info msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" Aug 13 00:52:49.389267 kubelet[2099]: I0813 00:52:49.388510 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:52:49.390088 env[1301]: time="2025-08-13T00:52:49.389468059Z" level=info msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" Aug 13 00:52:49.395430 kubelet[2099]: I0813 00:52:49.394588 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:52:49.398380 env[1301]: time="2025-08-13T00:52:49.398325654Z" level=info 
msg="StopPodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" Aug 13 00:52:49.457132 env[1301]: time="2025-08-13T00:52:49.457064013Z" level=error msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" failed" error="failed to destroy network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.457939 kubelet[2099]: E0813 00:52:49.457606 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:52:49.457939 kubelet[2099]: E0813 00:52:49.457704 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6"} Aug 13 00:52:49.457939 kubelet[2099]: E0813 00:52:49.457793 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5af231c3-7046-4023-9f1c-637c842bb333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:49.457939 kubelet[2099]: E0813 00:52:49.457870 2099 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"5af231c3-7046-4023-9f1c-637c842bb333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5846w" podUID="5af231c3-7046-4023-9f1c-637c842bb333" Aug 13 00:52:49.487823 env[1301]: time="2025-08-13T00:52:49.487742399Z" level=error msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" failed" error="failed to destroy network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.488823 kubelet[2099]: E0813 00:52:49.488351 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:52:49.488823 kubelet[2099]: E0813 00:52:49.488501 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b"} Aug 13 00:52:49.488823 kubelet[2099]: E0813 00:52:49.488691 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1be0258-fed9-4a06-957e-38cfc4092975\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:49.488823 kubelet[2099]: E0813 00:52:49.488749 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1be0258-fed9-4a06-957e-38cfc4092975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b78684bd4-f847k" podUID="b1be0258-fed9-4a06-957e-38cfc4092975" Aug 13 00:52:49.494305 env[1301]: time="2025-08-13T00:52:49.494208425Z" level=error msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" failed" error="failed to destroy network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.495047 kubelet[2099]: E0813 00:52:49.494793 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 
00:52:49.495047 kubelet[2099]: E0813 00:52:49.494894 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1"} Aug 13 00:52:49.495047 kubelet[2099]: E0813 00:52:49.494934 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e647cce1-5592-4786-90e1-64c87a11f433\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:49.495047 kubelet[2099]: E0813 00:52:49.494972 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e647cce1-5592-4786-90e1-64c87a11f433\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qzqlj" podUID="e647cce1-5592-4786-90e1-64c87a11f433" Aug 13 00:52:49.514778 env[1301]: time="2025-08-13T00:52:49.514706047Z" level=error msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" failed" error="failed to destroy network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.515394 kubelet[2099]: E0813 00:52:49.515210 2099 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:52:49.515394 kubelet[2099]: E0813 00:52:49.515268 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627"} Aug 13 00:52:49.515394 kubelet[2099]: E0813 00:52:49.515314 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e65125c0-f7bb-420d-885a-928dd8165be9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:49.515394 kubelet[2099]: E0813 00:52:49.515343 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e65125c0-f7bb-420d-885a-928dd8165be9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8qkv" podUID="e65125c0-f7bb-420d-885a-928dd8165be9" Aug 13 00:52:49.530409 env[1301]: time="2025-08-13T00:52:49.530327973Z" level=error msg="StopPodSandbox for 
\"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" failed" error="failed to destroy network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:49.531157 kubelet[2099]: E0813 00:52:49.530907 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:52:49.531157 kubelet[2099]: E0813 00:52:49.530982 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c"} Aug 13 00:52:49.531157 kubelet[2099]: E0813 00:52:49.531033 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"286d844b-f8f2-4cbc-961c-669a123d9626\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:49.531157 kubelet[2099]: E0813 00:52:49.531068 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"286d844b-f8f2-4cbc-961c-669a123d9626\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vhxc6" podUID="286d844b-f8f2-4cbc-961c-669a123d9626" Aug 13 00:52:49.874494 env[1301]: time="2025-08-13T00:52:49.874428744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-pg6q9,Uid:3eb358bf-331f-4984-a4e4-9d6d55f60ba0,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:52:49.895650 env[1301]: time="2025-08-13T00:52:49.895597844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-vlzdm,Uid:167dde7c-6e36-48ea-bd63-42e66d6a64d2,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:52:50.081977 env[1301]: time="2025-08-13T00:52:50.081876505Z" level=error msg="Failed to destroy network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.082469 env[1301]: time="2025-08-13T00:52:50.082396584Z" level=error msg="encountered an error cleaning up failed sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.082527 env[1301]: time="2025-08-13T00:52:50.082485225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-vlzdm,Uid:167dde7c-6e36-48ea-bd63-42e66d6a64d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.082761 kubelet[2099]: E0813 00:52:50.082720 2099 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.082884 kubelet[2099]: E0813 00:52:50.082791 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm" Aug 13 00:52:50.082884 kubelet[2099]: E0813 00:52:50.082820 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm" Aug 13 00:52:50.082977 kubelet[2099]: E0813 00:52:50.082881 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c77cb5bfc-vlzdm_calico-apiserver(167dde7c-6e36-48ea-bd63-42e66d6a64d2)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6c77cb5bfc-vlzdm_calico-apiserver(167dde7c-6e36-48ea-bd63-42e66d6a64d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm" podUID="167dde7c-6e36-48ea-bd63-42e66d6a64d2" Aug 13 00:52:50.101956 env[1301]: time="2025-08-13T00:52:50.101857397Z" level=error msg="Failed to destroy network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.102906 env[1301]: time="2025-08-13T00:52:50.102811733Z" level=error msg="encountered an error cleaning up failed sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.103224 env[1301]: time="2025-08-13T00:52:50.103169476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-pg6q9,Uid:3eb358bf-331f-4984-a4e4-9d6d55f60ba0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.104727 kubelet[2099]: E0813 00:52:50.104616 2099 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.104727 kubelet[2099]: E0813 00:52:50.104721 2099 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9" Aug 13 00:52:50.104921 kubelet[2099]: E0813 00:52:50.104746 2099 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9" Aug 13 00:52:50.104921 kubelet[2099]: E0813 00:52:50.104799 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c77cb5bfc-pg6q9_calico-apiserver(3eb358bf-331f-4984-a4e4-9d6d55f60ba0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c77cb5bfc-pg6q9_calico-apiserver(3eb358bf-331f-4984-a4e4-9d6d55f60ba0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9" podUID="3eb358bf-331f-4984-a4e4-9d6d55f60ba0" Aug 13 00:52:50.403195 kubelet[2099]: I0813 00:52:50.403119 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:52:50.405874 env[1301]: time="2025-08-13T00:52:50.405809045Z" level=info msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" Aug 13 00:52:50.414717 kubelet[2099]: I0813 00:52:50.414676 2099 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:52:50.417505 env[1301]: time="2025-08-13T00:52:50.416742759Z" level=info msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" Aug 13 00:52:50.523021 env[1301]: time="2025-08-13T00:52:50.522957584Z" level=error msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" failed" error="failed to destroy network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:52:50.523288 kubelet[2099]: E0813 00:52:50.523238 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:52:50.523416 kubelet[2099]: E0813 00:52:50.523300 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680"} Aug 13 00:52:50.523416 kubelet[2099]: E0813 00:52:50.523348 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"167dde7c-6e36-48ea-bd63-42e66d6a64d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:50.523416 kubelet[2099]: E0813 00:52:50.523377 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"167dde7c-6e36-48ea-bd63-42e66d6a64d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm" podUID="167dde7c-6e36-48ea-bd63-42e66d6a64d2" Aug 13 00:52:50.524022 env[1301]: time="2025-08-13T00:52:50.523951489Z" level=error msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" failed" error="failed to destroy network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:52:50.524502 kubelet[2099]: E0813 00:52:50.524434 2099 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:52:50.524619 kubelet[2099]: E0813 00:52:50.524510 2099 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8"} Aug 13 00:52:50.524619 kubelet[2099]: E0813 00:52:50.524553 2099 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3eb358bf-331f-4984-a4e4-9d6d55f60ba0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:52:50.524619 kubelet[2099]: E0813 00:52:50.524577 2099 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3eb358bf-331f-4984-a4e4-9d6d55f60ba0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9" podUID="3eb358bf-331f-4984-a4e4-9d6d55f60ba0" Aug 13 
00:52:50.674049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680-shm.mount: Deactivated successfully. Aug 13 00:52:50.674240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8-shm.mount: Deactivated successfully. Aug 13 00:52:58.786107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767363147.mount: Deactivated successfully. Aug 13 00:52:58.829079 env[1301]: time="2025-08-13T00:52:58.828993084Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:58.831033 env[1301]: time="2025-08-13T00:52:58.830987757Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:58.832852 env[1301]: time="2025-08-13T00:52:58.832809826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:58.834947 env[1301]: time="2025-08-13T00:52:58.834896682Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:52:58.835546 env[1301]: time="2025-08-13T00:52:58.835510163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:52:58.876278 env[1301]: time="2025-08-13T00:52:58.876222731Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:52:58.899055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946654140.mount: Deactivated successfully. Aug 13 00:52:58.902538 env[1301]: time="2025-08-13T00:52:58.902458362Z" level=info msg="CreateContainer within sandbox \"46179c985e6c20720ebd2704438ce51c9a10863e901e2146404695aee9687d11\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"22e24005876719f178c014506eac38206078b50817235964f516db1f32905e74\"" Aug 13 00:52:58.903951 env[1301]: time="2025-08-13T00:52:58.903891154Z" level=info msg="StartContainer for \"22e24005876719f178c014506eac38206078b50817235964f516db1f32905e74\"" Aug 13 00:52:59.001252 env[1301]: time="2025-08-13T00:52:59.001170405Z" level=info msg="StartContainer for \"22e24005876719f178c014506eac38206078b50817235964f516db1f32905e74\" returns successfully" Aug 13 00:52:59.391696 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:52:59.391970 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:52:59.779022 kubelet[2099]: I0813 00:52:59.776499 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rg7zr" podStartSLOduration=1.601126263 podStartE2EDuration="22.772934938s" podCreationTimestamp="2025-08-13 00:52:37 +0000 UTC" firstStartedPulling="2025-08-13 00:52:37.664992627 +0000 UTC m=+21.827683995" lastFinishedPulling="2025-08-13 00:52:58.83680129 +0000 UTC m=+42.999492670" observedRunningTime="2025-08-13 00:52:59.500029458 +0000 UTC m=+43.662720845" watchObservedRunningTime="2025-08-13 00:52:59.772934938 +0000 UTC m=+43.935626328" Aug 13 00:52:59.780662 env[1301]: time="2025-08-13T00:52:59.780608732Z" level=info msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" Aug 13 00:53:00.204851 env[1301]: time="2025-08-13T00:53:00.204788299Z" level=info msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" Aug 13 00:53:00.207178 env[1301]: time="2025-08-13T00:53:00.207117432Z" level=info msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.967 [INFO][3297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.969 [INFO][3297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" iface="eth0" netns="/var/run/netns/cni-828e5b9f-ea65-c6e2-ec44-bb5c16a248d8" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.970 [INFO][3297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" iface="eth0" netns="/var/run/netns/cni-828e5b9f-ea65-c6e2-ec44-bb5c16a248d8" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.971 [INFO][3297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" iface="eth0" netns="/var/run/netns/cni-828e5b9f-ea65-c6e2-ec44-bb5c16a248d8" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.971 [INFO][3297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:52:59.971 [INFO][3297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.249 [INFO][3305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.252 [INFO][3305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.252 [INFO][3305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.273 [WARNING][3305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.273 [INFO][3305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.283 [INFO][3305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:00.322468 env[1301]: 2025-08-13 00:53:00.316 [INFO][3297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:00.331229 systemd[1]: run-netns-cni\x2d828e5b9f\x2dea65\x2dc6e2\x2dec44\x2dbb5c16a248d8.mount: Deactivated successfully. 
Aug 13 00:53:00.333052 env[1301]: time="2025-08-13T00:53:00.332873461Z" level=info msg="TearDown network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" successfully" Aug 13 00:53:00.333052 env[1301]: time="2025-08-13T00:53:00.332944947Z" level=info msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" returns successfully" Aug 13 00:53:00.450392 kubelet[2099]: I0813 00:53:00.448766 2099 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-backend-key-pair\") pod \"b1be0258-fed9-4a06-957e-38cfc4092975\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " Aug 13 00:53:00.450392 kubelet[2099]: I0813 00:53:00.448850 2099 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n289\" (UniqueName: \"kubernetes.io/projected/b1be0258-fed9-4a06-957e-38cfc4092975-kube-api-access-4n289\") pod \"b1be0258-fed9-4a06-957e-38cfc4092975\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " Aug 13 00:53:00.450392 kubelet[2099]: I0813 00:53:00.448895 2099 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-ca-bundle\") pod \"b1be0258-fed9-4a06-957e-38cfc4092975\" (UID: \"b1be0258-fed9-4a06-957e-38cfc4092975\") " Aug 13 00:53:00.456551 kubelet[2099]: I0813 00:53:00.451019 2099 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b1be0258-fed9-4a06-957e-38cfc4092975" (UID: "b1be0258-fed9-4a06-957e-38cfc4092975"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:53:00.478816 systemd[1]: var-lib-kubelet-pods-b1be0258\x2dfed9\x2d4a06\x2d957e\x2d38cfc4092975-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4n289.mount: Deactivated successfully. Aug 13 00:53:00.491054 systemd[1]: var-lib-kubelet-pods-b1be0258\x2dfed9\x2d4a06\x2d957e\x2d38cfc4092975-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:53:00.497034 kubelet[2099]: I0813 00:53:00.496805 2099 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b1be0258-fed9-4a06-957e-38cfc4092975" (UID: "b1be0258-fed9-4a06-957e-38cfc4092975"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:53:00.497034 kubelet[2099]: I0813 00:53:00.496968 2099 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1be0258-fed9-4a06-957e-38cfc4092975-kube-api-access-4n289" (OuterVolumeSpecName: "kube-api-access-4n289") pod "b1be0258-fed9-4a06-957e-38cfc4092975" (UID: "b1be0258-fed9-4a06-957e-38cfc4092975"). InnerVolumeSpecName "kube-api-access-4n289". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:53:00.553466 kubelet[2099]: I0813 00:53:00.553349 2099 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-backend-key-pair\") on node \"ci-3510.3.8-8-adc8b0fbd5\" DevicePath \"\"" Aug 13 00:53:00.553466 kubelet[2099]: I0813 00:53:00.553389 2099 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n289\" (UniqueName: \"kubernetes.io/projected/b1be0258-fed9-4a06-957e-38cfc4092975-kube-api-access-4n289\") on node \"ci-3510.3.8-8-adc8b0fbd5\" DevicePath \"\"" Aug 13 00:53:00.553466 kubelet[2099]: I0813 00:53:00.553401 2099 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1be0258-fed9-4a06-957e-38cfc4092975-whisker-ca-bundle\") on node \"ci-3510.3.8-8-adc8b0fbd5\" DevicePath \"\"" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" iface="eth0" netns="/var/run/netns/cni-73f6829a-d873-8913-ef04-cee4773aaa97" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" iface="eth0" netns="/var/run/netns/cni-73f6829a-d873-8913-ef04-cee4773aaa97" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" iface="eth0" netns="/var/run/netns/cni-73f6829a-d873-8913-ef04-cee4773aaa97" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.398 [INFO][3326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.518 [INFO][3343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.524 [INFO][3343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.524 [INFO][3343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.533 [WARNING][3343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.533 [INFO][3343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.541 [INFO][3343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:00.561065 env[1301]: 2025-08-13 00:53:00.557 [INFO][3326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:00.563337 env[1301]: time="2025-08-13T00:53:00.561433490Z" level=info msg="TearDown network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" successfully" Aug 13 00:53:00.563337 env[1301]: time="2025-08-13T00:53:00.561537235Z" level=info msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" returns successfully" Aug 13 00:53:00.563919 env[1301]: time="2025-08-13T00:53:00.563824248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d798fdc4f-v55r9,Uid:ca17bcc4-9279-4936-bb62-b2a432984a63,Namespace:calico-system,Attempt:1,}" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.455 [INFO][3335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.455 [INFO][3335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" iface="eth0" netns="/var/run/netns/cni-11ae6f0c-4e3f-d135-cf8b-632d89785e8d" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.456 [INFO][3335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" iface="eth0" netns="/var/run/netns/cni-11ae6f0c-4e3f-d135-cf8b-632d89785e8d" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.463 [INFO][3335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" iface="eth0" netns="/var/run/netns/cni-11ae6f0c-4e3f-d135-cf8b-632d89785e8d" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.463 [INFO][3335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.463 [INFO][3335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.590 [INFO][3354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.595 [INFO][3354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.596 [INFO][3354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.605 [WARNING][3354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.605 [INFO][3354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.612 [INFO][3354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:00.619960 env[1301]: 2025-08-13 00:53:00.616 [INFO][3335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:00.619960 env[1301]: time="2025-08-13T00:53:00.619978297Z" level=info msg="TearDown network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" successfully" Aug 13 00:53:00.619960 env[1301]: time="2025-08-13T00:53:00.620018632Z" level=info msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" returns successfully" Aug 13 00:53:00.625750 kubelet[2099]: E0813 00:53:00.621390 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:00.625878 env[1301]: time="2025-08-13T00:53:00.625836185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5846w,Uid:5af231c3-7046-4023-9f1c-637c842bb333,Namespace:kube-system,Attempt:1,}" Aug 13 00:53:00.804464 systemd[1]: run-netns-cni\x2d11ae6f0c\x2d4e3f\x2dd135\x2dcf8b\x2d632d89785e8d.mount: Deactivated successfully. 
Aug 13 00:53:00.804711 systemd[1]: run-netns-cni\x2d73f6829a\x2dd873\x2d8913\x2def04\x2dcee4773aaa97.mount: Deactivated successfully. Aug 13 00:53:00.947116 systemd-networkd[1054]: cali5d5b436c8b3: Link UP Aug 13 00:53:00.952292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:53:00.952493 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5d5b436c8b3: link becomes ready Aug 13 00:53:00.952860 systemd-networkd[1054]: cali5d5b436c8b3: Gained carrier Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.647 [INFO][3374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.679 [INFO][3374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0 calico-kube-controllers-6d798fdc4f- calico-system ca17bcc4-9279-4936-bb62-b2a432984a63 908 0 2025-08-13 00:52:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d798fdc4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 calico-kube-controllers-6d798fdc4f-v55r9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d5b436c8b3 [] [] }} ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.679 [INFO][3374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" 
WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.764 [INFO][3411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" HandleID="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.765 [INFO][3411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" HandleID="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"calico-kube-controllers-6d798fdc4f-v55r9", "timestamp":"2025-08-13 00:53:00.764941023 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.765 [INFO][3411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.766 [INFO][3411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.766 [INFO][3411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.782 [INFO][3411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.841 [INFO][3411] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.863 [INFO][3411] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.875 [INFO][3411] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.883 [INFO][3411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.883 [INFO][3411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.887 [INFO][3411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22 Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.899 [INFO][3411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.909 [INFO][3411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.193/26] block=192.168.36.192/26 
handle="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.909 [INFO][3411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.193/26] handle="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.909 [INFO][3411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:01.042762 env[1301]: 2025-08-13 00:53:00.909 [INFO][3411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.193/26] IPv6=[] ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" HandleID="k8s-pod-network.ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:00.912 [INFO][3374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0", GenerateName:"calico-kube-controllers-6d798fdc4f-", Namespace:"calico-system", SelfLink:"", UID:"ca17bcc4-9279-4936-bb62-b2a432984a63", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d798fdc4f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"calico-kube-controllers-6d798fdc4f-v55r9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d5b436c8b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:00.912 [INFO][3374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.193/32] ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:00.912 [INFO][3374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d5b436c8b3 ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:00.974 [INFO][3374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" 
WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:00.975 [INFO][3374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0", GenerateName:"calico-kube-controllers-6d798fdc4f-", Namespace:"calico-system", SelfLink:"", UID:"ca17bcc4-9279-4936-bb62-b2a432984a63", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d798fdc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22", Pod:"calico-kube-controllers-6d798fdc4f-v55r9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d5b436c8b3", 
MAC:"76:71:35:79:cc:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:01.044200 env[1301]: 2025-08-13 00:53:01.033 [INFO][3374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22" Namespace="calico-system" Pod="calico-kube-controllers-6d798fdc4f-v55r9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:01.097137 env[1301]: time="2025-08-13T00:53:01.096881470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:01.097137 env[1301]: time="2025-08-13T00:53:01.096962482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:01.098029 env[1301]: time="2025-08-13T00:53:01.097679690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:01.098460 env[1301]: time="2025-08-13T00:53:01.098329683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22 pid=3443 runtime=io.containerd.runc.v2 Aug 13 00:53:01.120077 systemd-networkd[1054]: cali400aa52c920: Link UP Aug 13 00:53:01.123502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali400aa52c920: link becomes ready Aug 13 00:53:01.123730 systemd-networkd[1054]: cali400aa52c920: Gained carrier Aug 13 00:53:01.168291 systemd[1]: run-containerd-runc-k8s.io-ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22-runc.axyAc7.mount: Deactivated successfully. 
Aug 13 00:53:01.171844 kubelet[2099]: I0813 00:53:01.171484 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed954de4-badc-4744-8e27-f3ad13af58c1-whisker-backend-key-pair\") pod \"whisker-78cf69b664-bbxlk\" (UID: \"ed954de4-badc-4744-8e27-f3ad13af58c1\") " pod="calico-system/whisker-78cf69b664-bbxlk" Aug 13 00:53:01.171844 kubelet[2099]: I0813 00:53:01.171557 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2mgr\" (UniqueName: \"kubernetes.io/projected/ed954de4-badc-4744-8e27-f3ad13af58c1-kube-api-access-b2mgr\") pod \"whisker-78cf69b664-bbxlk\" (UID: \"ed954de4-badc-4744-8e27-f3ad13af58c1\") " pod="calico-system/whisker-78cf69b664-bbxlk" Aug 13 00:53:01.171844 kubelet[2099]: I0813 00:53:01.171582 2099 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed954de4-badc-4744-8e27-f3ad13af58c1-whisker-ca-bundle\") pod \"whisker-78cf69b664-bbxlk\" (UID: \"ed954de4-badc-4744-8e27-f3ad13af58c1\") " pod="calico-system/whisker-78cf69b664-bbxlk" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.727 [INFO][3399] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.767 [INFO][3399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0 coredns-7c65d6cfc9- kube-system 5af231c3-7046-4023-9f1c-637c842bb333 909 0 2025-08-13 00:52:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 coredns-7c65d6cfc9-5846w eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali400aa52c920 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.771 [INFO][3399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.897 [INFO][3423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" HandleID="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.897 [INFO][3423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" HandleID="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"coredns-7c65d6cfc9-5846w", "timestamp":"2025-08-13 00:53:00.897207813 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.897 [INFO][3423] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.913 [INFO][3423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:00.913 [INFO][3423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.005 [INFO][3423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.041 [INFO][3423] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.057 [INFO][3423] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.061 [INFO][3423] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.065 [INFO][3423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.065 [INFO][3423] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.070 [INFO][3423] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762 Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.079 [INFO][3423] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" 
host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.093 [INFO][3423] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.194/26] block=192.168.36.192/26 handle="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.093 [INFO][3423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.194/26] handle="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.093 [INFO][3423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:01.187348 env[1301]: 2025-08-13 00:53:01.093 [INFO][3423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.194/26] IPv6=[] ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" HandleID="k8s-pod-network.b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.188963 env[1301]: 2025-08-13 00:53:01.097 [INFO][3399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5af231c3-7046-4023-9f1c-637c842bb333", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"coredns-7c65d6cfc9-5846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali400aa52c920", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:01.188963 env[1301]: 2025-08-13 00:53:01.097 [INFO][3399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.194/32] ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.188963 env[1301]: 2025-08-13 00:53:01.097 [INFO][3399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali400aa52c920 ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.188963 
env[1301]: 2025-08-13 00:53:01.153 [INFO][3399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.188963 env[1301]: 2025-08-13 00:53:01.157 [INFO][3399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5af231c3-7046-4023-9f1c-637c842bb333", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762", Pod:"coredns-7c65d6cfc9-5846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali400aa52c920", MAC:"4a:04:e2:fb:b1:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:01.188963 env[1301]: 2025-08-13 00:53:01.177 [INFO][3399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5846w" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:01.204273 env[1301]: time="2025-08-13T00:53:01.202761876Z" level=info msg="StopPodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" Aug 13 00:53:01.222953 env[1301]: time="2025-08-13T00:53:01.222888682Z" level=info msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" Aug 13 00:53:01.230899 env[1301]: time="2025-08-13T00:53:01.230782212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:01.230899 env[1301]: time="2025-08-13T00:53:01.230853234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:01.230899 env[1301]: time="2025-08-13T00:53:01.230870494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:01.231739 env[1301]: time="2025-08-13T00:53:01.231603810Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762 pid=3488 runtime=io.containerd.runc.v2 Aug 13 00:53:01.330479 env[1301]: time="2025-08-13T00:53:01.327844735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cf69b664-bbxlk,Uid:ed954de4-badc-4744-8e27-f3ad13af58c1,Namespace:calico-system,Attempt:0,}" Aug 13 00:53:01.331054 kubelet[2099]: I0813 00:53:01.331008 2099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:53:01.331781 kubelet[2099]: E0813 00:53:01.331752 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:01.344574 env[1301]: time="2025-08-13T00:53:01.344526868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d798fdc4f-v55r9,Uid:ca17bcc4-9279-4936-bb62-b2a432984a63,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22\"" Aug 13 00:53:01.352195 env[1301]: time="2025-08-13T00:53:01.352079491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:53:01.610279 env[1301]: time="2025-08-13T00:53:01.610119012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5846w,Uid:5af231c3-7046-4023-9f1c-637c842bb333,Namespace:kube-system,Attempt:1,} returns sandbox id \"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762\"" Aug 13 00:53:01.613542 kubelet[2099]: E0813 00:53:01.612154 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:01.622645 env[1301]: time="2025-08-13T00:53:01.622593753Z" level=info msg="CreateContainer within sandbox \"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:53:01.633261 kubelet[2099]: E0813 00:53:01.633193 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:01.720181 env[1301]: time="2025-08-13T00:53:01.720032155Z" level=info msg="CreateContainer within sandbox \"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fb231d8a47e7317888d077f259854c19ce05ada4932aa6d8496e67a4e1c115a\"" Aug 13 00:53:01.722651 env[1301]: time="2025-08-13T00:53:01.722562323Z" level=info msg="StartContainer for \"5fb231d8a47e7317888d077f259854c19ce05ada4932aa6d8496e67a4e1c115a\"" Aug 13 00:53:01.811000 audit[3617]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:01.814382 kernel: kauditd_printk_skb: 8 callbacks suppressed Aug 13 00:53:01.814517 kernel: audit: type=1325 audit(1755046381.811:305): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:01.821166 kernel: audit: type=1300 audit(1755046381.811:305): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd2d0ae1b0 a2=0 a3=7ffd2d0ae19c items=0 ppid=2236 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:01.811000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd2d0ae1b0 a2=0 
a3=7ffd2d0ae19c items=0 ppid=2236 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:01.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:01.832549 kernel: audit: type=1327 audit(1755046381.811:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:01.832690 kernel: audit: type=1325 audit(1755046381.825:306): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:01.825000 audit[3617]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:01.838762 kernel: audit: type=1400 audit(1755046381.831:307): avc: denied { write } for pid=3628 comm="tee" name="fd" dev="proc" ino=25286 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:01.831000 audit[3628]: AVC avc: denied { write } for pid=3628 comm="tee" name="fd" dev="proc" ino=25286 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:01.831000 audit[3628]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccfb2f7d0 a2=241 a3=1b6 items=1 ppid=3620 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:01.848575 kernel: audit: type=1300 audit(1755046381.831:307): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccfb2f7d0 a2=241 a3=1b6 items=1 ppid=3620 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:01.860863 kernel: audit: type=1307 audit(1755046381.831:307): cwd="/etc/service/enabled/felix/log" Aug 13 00:53:01.861119 kernel: audit: type=1302 audit(1755046381.831:307): item=0 name="/dev/fd/63" inode=25283 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:01.831000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 00:53:01.831000 audit: PATH item=0 name="/dev/fd/63" inode=25283 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:01.870643 kernel: audit: type=1327 audit(1755046381.831:307): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:01.831000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:01.825000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd2d0ae1b0 a2=0 a3=7ffd2d0ae19c items=0 ppid=2236 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:01.880624 kernel: audit: type=1300 audit(1755046381.825:306): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd2d0ae1b0 a2=0 a3=7ffd2d0ae19c items=0 ppid=2236 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:53:01.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:02.011000 audit[3673]: AVC avc: denied { write } for pid=3673 comm="tee" name="fd" dev="proc" ino=25345 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.011000 audit[3673]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc129c57d0 a2=241 a3=1b6 items=1 ppid=3610 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.031000 audit[3670]: AVC avc: denied { write } for pid=3670 comm="tee" name="fd" dev="proc" ino=25349 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.031000 audit[3670]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2d66a7c1 a2=241 a3=1b6 items=1 ppid=3611 pid=3670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.031000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 00:53:02.031000 audit: PATH item=0 name="/dev/fd/63" inode=25328 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.057053 systemd[1]: run-containerd-runc-k8s.io-5fb231d8a47e7317888d077f259854c19ce05ada4932aa6d8496e67a4e1c115a-runc.qVlwuC.mount: Deactivated successfully. 
Aug 13 00:53:02.011000 audit: CWD cwd="/etc/service/enabled/bird6/log" Aug 13 00:53:02.011000 audit: PATH item=0 name="/dev/fd/63" inode=25733 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.073000 audit[3676]: AVC avc: denied { write } for pid=3676 comm="tee" name="fd" dev="proc" ino=25362 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.073000 audit[3676]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe66fe27c0 a2=241 a3=1b6 items=1 ppid=3608 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.073000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 00:53:02.073000 audit: PATH item=0 name="/dev/fd/63" inode=25734 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.075000 audit[3693]: AVC avc: denied { write } for pid=3693 comm="tee" name="fd" dev="proc" ino=25366 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.075000 audit[3693]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdc6ec07d1 a2=241 a3=1b6 items=1 ppid=3619 pid=3693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.075000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 00:53:02.075000 audit: PATH item=0 name="/dev/fd/63" inode=25353 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.075000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.098000 audit[3703]: AVC avc: denied { write } for pid=3703 comm="tee" name="fd" dev="proc" ino=25791 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.098000 audit[3703]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff335547d0 a2=241 a3=1b6 items=1 ppid=3618 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.098000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 00:53:02.098000 audit: PATH item=0 name="/dev/fd/63" inode=25787 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.125000 audit[3712]: AVC avc: denied { write } for pid=3712 comm="tee" name="fd" dev="proc" ino=25371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 00:53:02.125000 audit[3712]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdebcdd7d2 a2=241 a3=1b6 items=1 
ppid=3643 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.125000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 00:53:02.125000 audit: PATH item=0 name="/dev/fd/63" inode=25817 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:02.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 00:53:02.215684 systemd-networkd[1054]: cali5d5b436c8b3: Gained IPv6LL Aug 13 00:53:02.227564 kubelet[2099]: I0813 00:53:02.227469 2099 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1be0258-fed9-4a06-957e-38cfc4092975" path="/var/lib/kubelet/pods/b1be0258-fed9-4a06-957e-38cfc4092975/volumes" Aug 13 00:53:02.278270 env[1301]: time="2025-08-13T00:53:02.278181549Z" level=info msg="StartContainer for \"5fb231d8a47e7317888d077f259854c19ce05ada4932aa6d8496e67a4e1c115a\" returns successfully" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.673 [INFO][3509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.673 [INFO][3509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" iface="eth0" netns="/var/run/netns/cni-87dd1f4d-6651-45ed-07f4-f14e55e46be7" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.676 [INFO][3509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" iface="eth0" netns="/var/run/netns/cni-87dd1f4d-6651-45ed-07f4-f14e55e46be7" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.677 [INFO][3509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" iface="eth0" netns="/var/run/netns/cni-87dd1f4d-6651-45ed-07f4-f14e55e46be7" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.677 [INFO][3509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:01.677 [INFO][3509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.253 [INFO][3590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.254 [INFO][3590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.261 [INFO][3590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.293 [WARNING][3590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.293 [INFO][3590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.297 [INFO][3590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:02.309718 env[1301]: 2025-08-13 00:53:02.304 [INFO][3509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:02.322783 systemd[1]: run-netns-cni\x2d87dd1f4d\x2d6651\x2d45ed\x2d07f4\x2df14e55e46be7.mount: Deactivated successfully. 
Aug 13 00:53:02.325799 env[1301]: time="2025-08-13T00:53:02.325723768Z" level=info msg="TearDown network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" successfully" Aug 13 00:53:02.328718 env[1301]: time="2025-08-13T00:53:02.328641478Z" level=info msg="StopPodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" returns successfully" Aug 13 00:53:02.331980 kubelet[2099]: E0813 00:53:02.331546 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:02.332639 env[1301]: time="2025-08-13T00:53:02.332576377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vhxc6,Uid:286d844b-f8f2-4cbc-961c-669a123d9626,Namespace:kube-system,Attempt:1,}" Aug 13 00:53:02.344766 systemd-networkd[1054]: cali400aa52c920: Gained IPv6LL Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.768 [INFO][3525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.770 [INFO][3525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" iface="eth0" netns="/var/run/netns/cni-41c5f9f0-a876-eb52-308d-9dfe8f81dac0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.770 [INFO][3525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" iface="eth0" netns="/var/run/netns/cni-41c5f9f0-a876-eb52-308d-9dfe8f81dac0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.770 [INFO][3525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" iface="eth0" netns="/var/run/netns/cni-41c5f9f0-a876-eb52-308d-9dfe8f81dac0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.770 [INFO][3525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:01.775 [INFO][3525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.287 [INFO][3616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.310 [INFO][3616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.313 [INFO][3616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.354 [WARNING][3616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.354 [INFO][3616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.363 [INFO][3616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:02.385650 env[1301]: 2025-08-13 00:53:02.372 [INFO][3525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:02.388729 env[1301]: time="2025-08-13T00:53:02.388652032Z" level=info msg="TearDown network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" successfully" Aug 13 00:53:02.389060 env[1301]: time="2025-08-13T00:53:02.388970271Z" level=info msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" returns successfully" Aug 13 00:53:02.390812 env[1301]: time="2025-08-13T00:53:02.390753144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-vlzdm,Uid:167dde7c-6e36-48ea-bd63-42e66d6a64d2,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:53:02.516284 kubelet[2099]: E0813 00:53:02.515072 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:02.552488 kubelet[2099]: I0813 00:53:02.549210 2099 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-7c65d6cfc9-5846w" podStartSLOduration=42.549183763 podStartE2EDuration="42.549183763s" podCreationTimestamp="2025-08-13 00:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:02.548814174 +0000 UTC m=+46.711505562" watchObservedRunningTime="2025-08-13 00:53:02.549183763 +0000 UTC m=+46.711875153" Aug 13 00:53:02.559599 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:53:02.559826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie81d628e25e: link becomes ready Aug 13 00:53:02.560098 systemd-networkd[1054]: calie81d628e25e: Link UP Aug 13 00:53:02.564165 systemd-networkd[1054]: calie81d628e25e: Gained carrier Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:01.808 [INFO][3545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:01.946 [INFO][3545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0 whisker-78cf69b664- calico-system ed954de4-badc-4744-8e27-f3ad13af58c1 926 0 2025-08-13 00:53:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78cf69b664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 whisker-78cf69b664-bbxlk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie81d628e25e [] [] }} ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:01.948 [INFO][3545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.405 [INFO][3663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" HandleID="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.407 [INFO][3663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" HandleID="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003794f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"whisker-78cf69b664-bbxlk", "timestamp":"2025-08-13 00:53:02.405631057 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.407 [INFO][3663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.407 [INFO][3663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.407 [INFO][3663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.422 [INFO][3663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.436 [INFO][3663] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.450 [INFO][3663] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.459 [INFO][3663] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.470 [INFO][3663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.470 [INFO][3663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.474 [INFO][3663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8 Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.492 [INFO][3663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.523 [INFO][3663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.195/26] block=192.168.36.192/26 
handle="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.523 [INFO][3663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.195/26] handle="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.523 [INFO][3663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:02.620692 env[1301]: 2025-08-13 00:53:02.523 [INFO][3663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.195/26] IPv6=[] ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" HandleID="k8s-pod-network.078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.540 [INFO][3545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0", GenerateName:"whisker-78cf69b664-", Namespace:"calico-system", SelfLink:"", UID:"ed954de4-badc-4744-8e27-f3ad13af58c1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cf69b664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"whisker-78cf69b664-bbxlk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie81d628e25e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.541 [INFO][3545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.195/32] ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.541 [INFO][3545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie81d628e25e ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.563 [INFO][3545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.579 [INFO][3545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0", GenerateName:"whisker-78cf69b664-", Namespace:"calico-system", SelfLink:"", UID:"ed954de4-badc-4744-8e27-f3ad13af58c1", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cf69b664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8", Pod:"whisker-78cf69b664-bbxlk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie81d628e25e", MAC:"06:03:19:7e:fc:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:02.622208 env[1301]: 2025-08-13 00:53:02.599 [INFO][3545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8" Namespace="calico-system" Pod="whisker-78cf69b664-bbxlk" 
WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--78cf69b664--bbxlk-eth0" Aug 13 00:53:02.695000 audit[3764]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:02.695000 audit[3764]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeb2e18500 a2=0 a3=7ffeb2e184ec items=0 ppid=2236 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:02.708000 audit[3764]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=3764 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:02.708000 audit[3764]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeb2e18500 a2=0 a3=0 items=0 ppid=2236 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:02.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:02.773161 env[1301]: time="2025-08-13T00:53:02.771733632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:02.773161 env[1301]: time="2025-08-13T00:53:02.771807326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:02.773161 env[1301]: time="2025-08-13T00:53:02.771838798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:02.773161 env[1301]: time="2025-08-13T00:53:02.772054223Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8 pid=3771 runtime=io.containerd.runc.v2 Aug 13 00:53:02.790491 systemd[1]: run-netns-cni\x2d41c5f9f0\x2da876\x2deb52\x2d308d\x2d9dfe8f81dac0.mount: Deactivated successfully. Aug 13 00:53:02.975143 systemd[1]: run-containerd-runc-k8s.io-078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8-runc.lm4zEn.mount: Deactivated successfully. Aug 13 00:53:03.209621 env[1301]: time="2025-08-13T00:53:03.209551904Z" level=info msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" Aug 13 00:53:03.332139 systemd-networkd[1054]: cali42e6eeb4c73: Link UP Aug 13 00:53:03.349051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali42e6eeb4c73: link becomes ready Aug 13 00:53:03.348404 systemd-networkd[1054]: cali42e6eeb4c73: Gained carrier Aug 13 00:53:03.388474 env[1301]: time="2025-08-13T00:53:03.385803748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cf69b664-bbxlk,Uid:ed954de4-badc-4744-8e27-f3ad13af58c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8\"" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:02.797 [INFO][3738] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:02.832 [INFO][3738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0 coredns-7c65d6cfc9- kube-system 286d844b-f8f2-4cbc-961c-669a123d9626 941 0 2025-08-13 00:52:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 coredns-7c65d6cfc9-vhxc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42e6eeb4c73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:02.833 [INFO][3738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.133 [INFO][3794] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" HandleID="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.134 [INFO][3794] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" HandleID="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"coredns-7c65d6cfc9-vhxc6", "timestamp":"2025-08-13 00:53:03.133263203 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.134 [INFO][3794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.134 [INFO][3794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.134 [INFO][3794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.155 [INFO][3794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.176 [INFO][3794] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.210 [INFO][3794] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.228 [INFO][3794] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.235 [INFO][3794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.235 [INFO][3794] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.239 [INFO][3794] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc Aug 13 00:53:03.425271 env[1301]: 2025-08-13 
00:53:03.249 [INFO][3794] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.291 [INFO][3794] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.196/26] block=192.168.36.192/26 handle="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.292 [INFO][3794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.196/26] handle="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.292 [INFO][3794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:03.425271 env[1301]: 2025-08-13 00:53:03.293 [INFO][3794] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.196/26] IPv6=[] ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" HandleID="k8s-pod-network.b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.315 [INFO][3738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"286d844b-f8f2-4cbc-961c-669a123d9626", ResourceVersion:"941", 
Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"coredns-7c65d6cfc9-vhxc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42e6eeb4c73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.316 [INFO][3738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.196/32] ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.316 [INFO][3738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42e6eeb4c73 
ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.357 [INFO][3738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.377 [INFO][3738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"286d844b-f8f2-4cbc-961c-669a123d9626", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc", 
Pod:"coredns-7c65d6cfc9-vhxc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42e6eeb4c73", MAC:"f6:dc:21:4b:69:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:03.426748 env[1301]: 2025-08-13 00:53:03.421 [INFO][3738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vhxc6" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:03.524859 systemd-networkd[1054]: califcb42a18103: Link UP Aug 13 00:53:03.532509 kubelet[2099]: E0813 00:53:03.531541 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:03.542996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califcb42a18103: link becomes ready Aug 13 00:53:03.537791 systemd-networkd[1054]: califcb42a18103: Gained carrier Aug 13 00:53:03.579367 env[1301]: time="2025-08-13T00:53:03.571018323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:03.579367 env[1301]: time="2025-08-13T00:53:03.571106686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:03.579367 env[1301]: time="2025-08-13T00:53:03.571123775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:03.597871 env[1301]: time="2025-08-13T00:53:03.580076546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc pid=3877 runtime=io.containerd.runc.v2 Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:02.856 [INFO][3747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:02.896 [INFO][3747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0 calico-apiserver-6c77cb5bfc- calico-apiserver 167dde7c-6e36-48ea-bd63-42e66d6a64d2 949 0 2025-08-13 00:52:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c77cb5bfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 calico-apiserver-6c77cb5bfc-vlzdm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcb42a18103 [] [] }} ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:02.897 
[INFO][3747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.275 [INFO][3804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" HandleID="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.276 [INFO][3804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" HandleID="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033c110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"calico-apiserver-6c77cb5bfc-vlzdm", "timestamp":"2025-08-13 00:53:03.275302572 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.276 [INFO][3804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.292 [INFO][3804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.293 [INFO][3804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.309 [INFO][3804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.373 [INFO][3804] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.393 [INFO][3804] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.419 [INFO][3804] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.430 [INFO][3804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.430 [INFO][3804] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.436 [INFO][3804] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.447 [INFO][3804] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.467 [INFO][3804] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.197/26] block=192.168.36.192/26 
handle="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.467 [INFO][3804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.197/26] handle="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.467 [INFO][3804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:03.624995 env[1301]: 2025-08-13 00:53:03.467 [INFO][3804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.197/26] IPv6=[] ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" HandleID="k8s-pod-network.ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.626226 env[1301]: 2025-08-13 00:53:03.491 [INFO][3747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"167dde7c-6e36-48ea-bd63-42e66d6a64d2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"calico-apiserver-6c77cb5bfc-vlzdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcb42a18103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:03.626226 env[1301]: 2025-08-13 00:53:03.492 [INFO][3747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.197/32] ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.626226 env[1301]: 2025-08-13 00:53:03.492 [INFO][3747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcb42a18103 ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.626226 env[1301]: 2025-08-13 00:53:03.556 [INFO][3747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 
00:53:03.626226 env[1301]: 2025-08-13 00:53:03.557 [INFO][3747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"167dde7c-6e36-48ea-bd63-42e66d6a64d2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d", Pod:"calico-apiserver-6c77cb5bfc-vlzdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcb42a18103", MAC:"9a:8a:0e:a9:05:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Aug 13 00:53:03.626226 env[1301]: 2025-08-13 00:53:03.609 [INFO][3747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-vlzdm" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:03.664000 audit[3902]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:03.664000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd0ea6040 a2=0 a3=7fffd0ea602c items=0 ppid=2236 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:03.676000 audit[3902]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:03.676000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffd0ea6040 a2=0 a3=7fffd0ea602c items=0 ppid=2236 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:03.687708 systemd-networkd[1054]: calie81d628e25e: Gained IPv6LL Aug 13 00:53:03.786655 systemd[1]: run-containerd-runc-k8s.io-b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc-runc.BjvIjb.mount: Deactivated 
successfully. Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.841000 audit: BPF prog-id=10 op=LOAD Aug 
13 00:53:03.841000 audit[3939]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe98906ce0 a2=98 a3=1fffffffffffffff items=0 ppid=3631 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.841000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:53:03.842000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit: BPF prog-id=11 op=LOAD Aug 13 00:53:03.843000 audit[3939]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe98906bc0 a2=94 a3=3 items=0 ppid=3631 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.843000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:53:03.843000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { bpf } for pid=3939 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit: BPF prog-id=12 op=LOAD Aug 13 00:53:03.843000 audit[3939]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe98906c00 a2=94 a3=7ffe98906de0 items=0 ppid=3631 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.843000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:53:03.843000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:53:03.843000 audit[3939]: AVC avc: denied { perfmon } for pid=3939 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.843000 audit[3939]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe98906cd0 a2=50 a3=a000000085 items=0 ppid=3631 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.843000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.853000 audit: BPF prog-id=13 op=LOAD Aug 13 00:53:03.853000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc53b3b6c0 a2=98 a3=3 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:03.853000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit: BPF prog-id=14 op=LOAD Aug 13 00:53:03.867000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc53b3b4b0 a2=94 a3=54428f items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.867000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:03.867000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:03.867000 audit: BPF prog-id=15 op=LOAD Aug 13 00:53:03.867000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc53b3b4e0 a2=94 a3=2 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:03.867000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:03.867000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:53:03.869989 env[1301]: time="2025-08-13T00:53:03.854818301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:03.869989 env[1301]: time="2025-08-13T00:53:03.854901168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:03.869989 env[1301]: time="2025-08-13T00:53:03.854918812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:03.869989 env[1301]: time="2025-08-13T00:53:03.855246385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d pid=3927 runtime=io.containerd.runc.v2 Aug 13 00:53:03.988553 env[1301]: time="2025-08-13T00:53:03.988417182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vhxc6,Uid:286d844b-f8f2-4cbc-961c-669a123d9626,Namespace:kube-system,Attempt:1,} returns sandbox id \"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc\"" Aug 13 00:53:03.989728 kubelet[2099]: E0813 00:53:03.989686 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:04.008295 env[1301]: time="2025-08-13T00:53:04.008239425Z" level=info msg="CreateContainer within sandbox \"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:53:04.062699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723766090.mount: Deactivated successfully. 
Aug 13 00:53:04.097076 env[1301]: time="2025-08-13T00:53:04.096985734Z" level=info msg="CreateContainer within sandbox \"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82c410de15dbd6f92135e29de8e88e962b38431975477899d0e30360798b8c4a\"" Aug 13 00:53:04.111452 env[1301]: time="2025-08-13T00:53:04.106397295Z" level=info msg="StartContainer for \"82c410de15dbd6f92135e29de8e88e962b38431975477899d0e30360798b8c4a\"" Aug 13 00:53:04.207536 env[1301]: time="2025-08-13T00:53:04.206202963Z" level=info msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" Aug 13 00:53:04.215332 env[1301]: time="2025-08-13T00:53:04.210549476Z" level=info msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.941 [INFO][3843] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.944 [INFO][3843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" iface="eth0" netns="/var/run/netns/cni-c0ba4188-c448-9149-d957-c989bf306a76" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.949 [INFO][3843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" iface="eth0" netns="/var/run/netns/cni-c0ba4188-c448-9149-d957-c989bf306a76" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.949 [INFO][3843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" iface="eth0" netns="/var/run/netns/cni-c0ba4188-c448-9149-d957-c989bf306a76" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.949 [INFO][3843] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:03.949 [INFO][3843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.173 [INFO][3958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.176 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.177 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.188 [WARNING][3958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.188 [INFO][3958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.192 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:04.258039 env[1301]: 2025-08-13 00:53:04.238 [INFO][3843] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:04.260397 env[1301]: time="2025-08-13T00:53:04.258383886Z" level=info msg="TearDown network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" successfully" Aug 13 00:53:04.260397 env[1301]: time="2025-08-13T00:53:04.258468506Z" level=info msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" returns successfully" Aug 13 00:53:04.260397 env[1301]: time="2025-08-13T00:53:04.260185852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qzqlj,Uid:e647cce1-5592-4786-90e1-64c87a11f433,Namespace:calico-system,Attempt:1,}" Aug 13 00:53:04.385562 env[1301]: time="2025-08-13T00:53:04.384660874Z" level=info msg="StartContainer for \"82c410de15dbd6f92135e29de8e88e962b38431975477899d0e30360798b8c4a\" returns successfully" Aug 13 00:53:04.595993 kubelet[2099]: E0813 00:53:04.595945 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:04.598614 kubelet[2099]: E0813 00:53:04.596706 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:04.668932 kubelet[2099]: I0813 00:53:04.662033 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vhxc6" podStartSLOduration=44.662007461 podStartE2EDuration="44.662007461s" podCreationTimestamp="2025-08-13 00:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:04.661343112 +0000 UTC m=+48.824034517" watchObservedRunningTime="2025-08-13 00:53:04.662007461 +0000 UTC m=+48.824698849" Aug 13 00:53:04.745521 env[1301]: time="2025-08-13T00:53:04.745408655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-vlzdm,Uid:167dde7c-6e36-48ea-bd63-42e66d6a64d2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d\"" Aug 13 00:53:04.746000 audit[4068]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:04.746000 audit[4068]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff3161c080 a2=0 a3=7fff3161c06c items=0 ppid=2236 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:04.767000 audit[4068]: NETFILTER_CFG table=nat:106 family=2 entries=44 op=nft_register_rule pid=4068 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:04.767000 audit[4068]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff3161c080 a2=0 a3=7fff3161c06c items=0 ppid=2236 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:04.798003 systemd[1]: run-netns-cni\x2dc0ba4188\x2dc448\x2d9149\x2dd957\x2dc989bf306a76.mount: Deactivated successfully. Aug 13 00:53:04.817000 audit[4076]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:04.817000 audit[4076]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe5ced9c90 a2=0 a3=7ffe5ced9c7c items=0 ppid=2236 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:04.867000 audit[4076]: NETFILTER_CFG table=nat:108 family=2 entries=56 op=nft_register_chain pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:04.867000 audit[4076]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe5ced9c90 a2=0 a3=7ffe5ced9c7c items=0 ppid=2236 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.867000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit: BPF prog-id=16 op=LOAD Aug 13 00:53:04.898000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc53b3b3a0 a2=94 a3=1 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.898000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:53:04.898000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.898000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc53b3b470 a2=50 a3=7ffc53b3b550 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.924000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.924000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc53b3b3b0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.924000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc53b3b3e0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc53b3b2f0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc53b3b400 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc53b3b3e0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc53b3b3d0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc53b3b400 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc53b3b3e0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc53b3b400 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc53b3b3d0 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.925000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.925000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=4 a0=12 a1=7ffc53b3b440 a2=28 a3=0 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc53b3b1f0 a2=50 a3=1 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit: BPF prog-id=17 op=LOAD Aug 13 00:53:04.927000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc53b3b1f0 a2=94 a3=5 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.927000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc53b3b2a0 a2=50 a3=1 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:53:04.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc53b3b3c0 a2=4 a3=38 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.927000 audit[3941]: AVC avc: denied { confidentiality } for pid=3941 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:04.927000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc53b3b410 a2=94 a3=6 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { confidentiality } for pid=3941 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:04.928000 
audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc53b3abc0 a2=94 a3=88 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { perfmon } for pid=3941 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { bpf } for pid=3941 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.928000 audit[3941]: AVC avc: denied { confidentiality } for pid=3941 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:04.928000 audit[3941]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc53b3abc0 a2=94 a3=88 items=0 ppid=3631 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.548 [INFO][4028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.581 [INFO][4028] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" iface="eth0" netns="/var/run/netns/cni-76b7a747-1e60-6fff-d79c-e4476cda14c6" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.582 [INFO][4028] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" iface="eth0" netns="/var/run/netns/cni-76b7a747-1e60-6fff-d79c-e4476cda14c6" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.583 [INFO][4028] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" iface="eth0" netns="/var/run/netns/cni-76b7a747-1e60-6fff-d79c-e4476cda14c6" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.583 [INFO][4028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.583 [INFO][4028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.866 [INFO][4054] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.867 [INFO][4054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.867 [INFO][4054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.898 [WARNING][4054] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.898 [INFO][4054] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.902 [INFO][4054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:04.937699 env[1301]: 2025-08-13 00:53:04.920 [INFO][4028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:04.943651 systemd[1]: run-netns-cni\x2d76b7a747\x2d1e60\x2d6fff\x2dd79c\x2de4476cda14c6.mount: Deactivated successfully. 
Aug 13 00:53:04.953301 env[1301]: time="2025-08-13T00:53:04.953225274Z" level=info msg="TearDown network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" successfully" Aug 13 00:53:04.953606 env[1301]: time="2025-08-13T00:53:04.953571012Z" level=info msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" returns successfully" Aug 13 00:53:04.955417 env[1301]: time="2025-08-13T00:53:04.955315450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8qkv,Uid:e65125c0-f7bb-420d-885a-928dd8165be9,Namespace:calico-system,Attempt:1,}" Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit: BPF prog-id=18 op=LOAD Aug 13 00:53:04.966000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1fabacd0 a2=98 a3=1999999999999999 items=0 ppid=3631 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.966000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:53:04.966000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: 
AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit: BPF prog-id=19 op=LOAD Aug 13 00:53:04.966000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1fababb0 a2=94 a3=ffff items=0 ppid=3631 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.966000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:53:04.966000 audit: BPF prog-id=19 op=UNLOAD 
Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { perfmon } for pid=4087 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit[4087]: AVC avc: denied { bpf } for pid=4087 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:04.966000 audit: BPF prog-id=20 op=LOAD Aug 13 
00:53:04.966000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1fababf0 a2=94 a3=7ffd1fabadd0 items=0 ppid=3631 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:04.966000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 00:53:04.966000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:53:05.165292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:53:05.165487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7136af6eada: link becomes ready Aug 13 00:53:05.159039 systemd-networkd[1054]: cali7136af6eada: Link UP Aug 13 00:53:05.159583 systemd-networkd[1054]: cali7136af6eada: Gained carrier Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.810 [INFO][4035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.814 [INFO][4035] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" iface="eth0" netns="/var/run/netns/cni-97e6b117-4c24-329b-cc63-4fce5f93cebb" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.818 [INFO][4035] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" iface="eth0" netns="/var/run/netns/cni-97e6b117-4c24-329b-cc63-4fce5f93cebb" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.818 [INFO][4035] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" iface="eth0" netns="/var/run/netns/cni-97e6b117-4c24-329b-cc63-4fce5f93cebb" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.819 [INFO][4035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:04.819 [INFO][4035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.072 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.073 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.112 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.151 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.163 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.170 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:05.194499 env[1301]: 2025-08-13 00:53:05.178 [INFO][4035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:05.202767 systemd[1]: run-netns-cni\x2d97e6b117\x2d4c24\x2d329b\x2dcc63\x2d4fce5f93cebb.mount: Deactivated successfully. 
Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:04.716 [INFO][4009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0 goldmane-58fd7646b9- calico-system e647cce1-5592-4786-90e1-64c87a11f433 980 0 2025-08-13 00:52:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 goldmane-58fd7646b9-qzqlj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7136af6eada [] [] }} ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:04.720 [INFO][4009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.015 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" HandleID="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.016 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" HandleID="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" 
Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"goldmane-58fd7646b9-qzqlj", "timestamp":"2025-08-13 00:53:05.015484156 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.016 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.016 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.016 [INFO][4070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.036 [INFO][4070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.057 [INFO][4070] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.076 [INFO][4070] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.080 [INFO][4070] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.087 [INFO][4070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.087 [INFO][4070] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.091 [INFO][4070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9 Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.099 [INFO][4070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.111 [INFO][4070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.198/26] block=192.168.36.192/26 handle="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.111 [INFO][4070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.198/26] handle="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.127 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:53:05.206434 env[1301]: 2025-08-13 00:53:05.127 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.198/26] IPv6=[] ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" HandleID="k8s-pod-network.076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.135 [INFO][4009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e647cce1-5592-4786-90e1-64c87a11f433", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"goldmane-58fd7646b9-qzqlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali7136af6eada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.136 [INFO][4009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.198/32] ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.136 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7136af6eada ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.154 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.154 [INFO][4009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e647cce1-5592-4786-90e1-64c87a11f433", 
ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9", Pod:"goldmane-58fd7646b9-qzqlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7136af6eada", MAC:"26:58:93:d5:2e:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:05.208677 env[1301]: 2025-08-13 00:53:05.176 [INFO][4009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9" Namespace="calico-system" Pod="goldmane-58fd7646b9-qzqlj" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:05.214948 env[1301]: time="2025-08-13T00:53:05.214886485Z" level=info msg="TearDown network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" successfully" Aug 13 00:53:05.216144 env[1301]: time="2025-08-13T00:53:05.216043536Z" level=info msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" returns successfully" Aug 13 00:53:05.216936 env[1301]: time="2025-08-13T00:53:05.216877645Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-pg6q9,Uid:3eb358bf-331f-4984-a4e4-9d6d55f60ba0,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:53:05.251795 systemd-networkd[1054]: vxlan.calico: Link UP Aug 13 00:53:05.251805 systemd-networkd[1054]: vxlan.calico: Gained carrier Aug 13 00:53:05.352003 systemd-networkd[1054]: cali42e6eeb4c73: Gained IPv6LL Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.365000 audit: BPF prog-id=21 op=LOAD Aug 13 00:53:05.365000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9b6e1c90 a2=98 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.365000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.366000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit: BPF prog-id=22 op=LOAD Aug 13 00:53:05.367000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9b6e1aa0 a2=94 a3=54428f items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.367000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.367000 audit: BPF prog-id=23 op=LOAD Aug 13 00:53:05.367000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9b6e1ad0 a2=94 a3=2 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Aug 13 00:53:05.367000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e19a0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9b6e19d0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9b6e18e0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e19f0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e19d0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e19c0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e19f0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied 
{ bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9b6e19d0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9b6e19f0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9b6e19c0 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffd9b6e1a30 a2=28 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.368000 audit: BPF prog-id=24 op=LOAD Aug 13 00:53:05.368000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9b6e18a0 a2=94 a3=0 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.368000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.368000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:53:05.370000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffd9b6e1890 a2=50 a3=2800 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffd9b6e1890 a2=50 a3=2800 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit: BPF prog-id=25 op=LOAD Aug 13 00:53:05.370000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9b6e10b0 a2=94 a3=2 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.370000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.370000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { perfmon } for pid=4151 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit[4151]: AVC avc: denied { bpf } for pid=4151 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.370000 audit: BPF prog-id=26 op=LOAD Aug 13 00:53:05.370000 audit[4151]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9b6e11b0 a2=94 a3=30 items=0 ppid=3631 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit: BPF prog-id=27 op=LOAD Aug 13 00:53:05.384000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7a550ff0 a2=98 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:05.384000 audit: BPF prog-id=27 op=UNLOAD Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.384000 audit: BPF prog-id=28 op=LOAD Aug 13 00:53:05.384000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7a550de0 a2=94 a3=54428f items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 00:53:05.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:05.387000 audit: BPF prog-id=28 op=UNLOAD Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 00:53:05.387000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:05.387000 audit: BPF prog-id=29 op=LOAD Aug 13 00:53:05.387000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7a550e10 a2=94 a3=2 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:05.387000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:05.387000 audit: BPF prog-id=29 op=UNLOAD Aug 13 00:53:05.422472 env[1301]: time="2025-08-13T00:53:05.404871990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:05.422472 env[1301]: time="2025-08-13T00:53:05.405010589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:05.422472 env[1301]: time="2025-08-13T00:53:05.405040113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:05.422472 env[1301]: time="2025-08-13T00:53:05.409782294Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9 pid=4156 runtime=io.containerd.runc.v2 Aug 13 00:53:05.544636 systemd-networkd[1054]: califcb42a18103: Gained IPv6LL Aug 13 00:53:05.626492 kubelet[2099]: E0813 00:53:05.625564 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:05.626492 kubelet[2099]: E0813 00:53:05.626233 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:05.905599 systemd-networkd[1054]: calif28cb6ef70c: Link UP Aug 13 00:53:05.916185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif28cb6ef70c: link becomes ready Aug 13 00:53:05.914762 systemd-networkd[1054]: calif28cb6ef70c: Gained carrier Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.446 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0 csi-node-driver- calico-system e65125c0-f7bb-420d-885a-928dd8165be9 989 0 2025-08-13 00:52:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 csi-node-driver-n8qkv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif28cb6ef70c [] [] }} 
ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.446 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.716 [INFO][4169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" HandleID="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.716 [INFO][4169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" HandleID="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000400130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"csi-node-driver-n8qkv", "timestamp":"2025-08-13 00:53:05.71612036 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.744 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.744 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.744 [INFO][4169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.766 [INFO][4169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.803 [INFO][4169] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.824 [INFO][4169] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.835 [INFO][4169] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.844 [INFO][4169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.844 [INFO][4169] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.849 [INFO][4169] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7 Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.865 [INFO][4169] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 handle="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 
2025-08-13 00:53:05.879 [INFO][4169] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.199/26] block=192.168.36.192/26 handle="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.879 [INFO][4169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.199/26] handle="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.879 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:05.983858 env[1301]: 2025-08-13 00:53:05.879 [INFO][4169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.199/26] IPv6=[] ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" HandleID="k8s-pod-network.389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.985750 env[1301]: 2025-08-13 00:53:05.889 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e65125c0-f7bb-420d-885a-928dd8165be9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"csi-node-driver-n8qkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif28cb6ef70c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:05.985750 env[1301]: 2025-08-13 00:53:05.891 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.199/32] ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.985750 env[1301]: 2025-08-13 00:53:05.891 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif28cb6ef70c ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.985750 env[1301]: 2025-08-13 00:53:05.937 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:05.985750 
env[1301]: 2025-08-13 00:53:05.941 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e65125c0-f7bb-420d-885a-928dd8165be9", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7", Pod:"csi-node-driver-n8qkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif28cb6ef70c", MAC:"62:1e:eb:04:09:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:05.985750 env[1301]: 2025-08-13 
00:53:05.969 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7" Namespace="calico-system" Pod="csi-node-driver-n8qkv" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:06.004268 env[1301]: time="2025-08-13T00:53:06.004184629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qzqlj,Uid:e647cce1-5592-4786-90e1-64c87a11f433,Namespace:calico-system,Attempt:1,} returns sandbox id \"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9\"" Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.007000 audit: BPF prog-id=30 op=LOAD Aug 13 00:53:06.007000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7a550cd0 a2=94 a3=1 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.008000 audit: BPF prog-id=30 op=UNLOAD Aug 13 00:53:06.008000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.008000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe7a550da0 a2=50 a3=7ffe7a550e80 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.008000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.063000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.063000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550ce0 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.063000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066190 systemd-networkd[1054]: cali7dc531145d2: Link UP Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7a550d10 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7a550c20 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550d30 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550d10 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550d00 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.066000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.066000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550d30 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.066000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.067000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.067000 audit[4155]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7a550d10 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.067000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.067000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7a550d30 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.067000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.067000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7a550d00 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.067000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.067000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.067000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7a550d70 a2=28 a3=0 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.067000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe7a550b20 a2=50 a3=1 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied 
{ bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit: BPF prog-id=31 op=LOAD Aug 13 00:53:06.068000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7a550b20 a2=94 a3=5 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.068000 audit: BPF prog-id=31 op=UNLOAD Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe7a550bd0 a2=50 a3=1 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe7a550cf0 a2=4 a3=38 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.068000 audit[4155]: AVC avc: denied { confidentiality } for pid=4155 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:06.068000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7a550d40 a2=94 a3=6 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.068000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: 
denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { confidentiality } for pid=4155 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:06.072000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7a5504f0 a2=94 a3=88 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.072000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.119894 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7dc531145d2: link becomes ready Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { perfmon } for pid=4155 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.072000 audit[4155]: AVC avc: denied { confidentiality } for pid=4155 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 00:53:06.072000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7a5504f0 a2=94 a3=88 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.072000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.074000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.074000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe7a551f20 a2=10 a3=f8f00800 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.074000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.074000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.074000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=0 a0=f a1=7ffe7a551dc0 a2=10 a3=3 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.074000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.074000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.074000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe7a551d60 a2=10 a3=3 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.074000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.074000 audit[4155]: AVC avc: denied { bpf } for pid=4155 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 00:53:06.074000 audit[4155]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe7a551d60 a2=10 a3=7 items=0 ppid=3631 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.074000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 00:53:06.084000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:53:06.124984 env[1301]: time="2025-08-13T00:53:06.090246738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:06.124984 env[1301]: time="2025-08-13T00:53:06.090352394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:06.124984 env[1301]: time="2025-08-13T00:53:06.090388608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:06.124984 env[1301]: time="2025-08-13T00:53:06.092816804Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7 pid=4225 runtime=io.containerd.runc.v2 Aug 13 00:53:06.111399 systemd-networkd[1054]: cali7dc531145d2: Gained carrier Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.584 [INFO][4134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0 calico-apiserver-6c77cb5bfc- calico-apiserver 3eb358bf-331f-4984-a4e4-9d6d55f60ba0 1001 0 2025-08-13 00:52:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c77cb5bfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-8-adc8b0fbd5 calico-apiserver-6c77cb5bfc-pg6q9 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali7dc531145d2 [] [] }} ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.584 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.758 [INFO][4187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" HandleID="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.759 [INFO][4187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" HandleID="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-8-adc8b0fbd5", "pod":"calico-apiserver-6c77cb5bfc-pg6q9", "timestamp":"2025-08-13 00:53:05.758689227 +0000 UTC"}, Hostname:"ci-3510.3.8-8-adc8b0fbd5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:53:06.171008 env[1301]: 2025-08-13 
00:53:05.759 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.879 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.879 [INFO][4187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-8-adc8b0fbd5' Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.935 [INFO][4187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.975 [INFO][4187] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.990 [INFO][4187] ipam/ipam.go 511: Trying affinity for 192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:05.999 [INFO][4187] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.004 [INFO][4187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.192/26 host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.004 [INFO][4187] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.36.192/26 handle="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.012 [INFO][4187] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337 Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.021 [INFO][4187] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.36.192/26 
handle="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.038 [INFO][4187] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.36.200/26] block=192.168.36.192/26 handle="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.039 [INFO][4187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.200/26] handle="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" host="ci-3510.3.8-8-adc8b0fbd5" Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.039 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:06.171008 env[1301]: 2025-08-13 00:53:06.039 [INFO][4187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.36.200/26] IPv6=[] ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" HandleID="k8s-pod-network.cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.046 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3eb358bf-331f-4984-a4e4-9d6d55f60ba0", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, 
time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"", Pod:"calico-apiserver-6c77cb5bfc-pg6q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7dc531145d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.047 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.200/32] ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.047 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dc531145d2 ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.129 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.130 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3eb358bf-331f-4984-a4e4-9d6d55f60ba0", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337", Pod:"calico-apiserver-6c77cb5bfc-pg6q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7dc531145d2", MAC:"96:d4:20:4c:86:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:06.173393 env[1301]: 2025-08-13 00:53:06.152 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337" Namespace="calico-apiserver" Pod="calico-apiserver-6c77cb5bfc-pg6q9" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:06.204756 systemd[1]: run-containerd-runc-k8s.io-389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7-runc.gRfFss.mount: Deactivated successfully. Aug 13 00:53:06.250402 env[1301]: time="2025-08-13T00:53:06.250286058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:53:06.250402 env[1301]: time="2025-08-13T00:53:06.250403210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:53:06.250673 env[1301]: time="2025-08-13T00:53:06.250428674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:53:06.250673 env[1301]: time="2025-08-13T00:53:06.250647090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337 pid=4279 runtime=io.containerd.runc.v2 Aug 13 00:53:06.312667 systemd-networkd[1054]: cali7136af6eada: Gained IPv6LL Aug 13 00:53:06.411592 env[1301]: time="2025-08-13T00:53:06.411529645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8qkv,Uid:e65125c0-f7bb-420d-885a-928dd8165be9,Namespace:calico-system,Attempt:1,} returns sandbox id \"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7\"" Aug 13 00:53:06.436584 env[1301]: time="2025-08-13T00:53:06.434869230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c77cb5bfc-pg6q9,Uid:3eb358bf-331f-4984-a4e4-9d6d55f60ba0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337\"" Aug 13 00:53:06.440000 audit[4336]: NETFILTER_CFG table=mangle:109 family=2 entries=16 op=nft_register_chain pid=4336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:53:06.440000 audit[4336]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd9314fe20 a2=0 a3=7ffd9314fe0c items=0 ppid=3631 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.440000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:53:06.454000 audit[4328]: NETFILTER_CFG table=nat:110 family=2 entries=15 op=nft_register_chain pid=4328 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 
00:53:06.454000 audit[4328]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffee7a03500 a2=0 a3=7ffee7a034ec items=0 ppid=3631 pid=4328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.454000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:53:06.478000 audit[4329]: NETFILTER_CFG table=raw:111 family=2 entries=21 op=nft_register_chain pid=4329 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:53:06.478000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcd388d500 a2=0 a3=7ffcd388d4ec items=0 ppid=3631 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:53:06.483000 audit[4335]: NETFILTER_CFG table=filter:112 family=2 entries=228 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:53:06.483000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=132672 a0=3 a1=7ffd503e1bf0 a2=0 a3=55889b3d9000 items=0 ppid=3631 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.483000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:53:06.506620 systemd-networkd[1054]: vxlan.calico: Gained IPv6LL Aug 13 00:53:06.556000 audit[4351]: NETFILTER_CFG table=filter:113 family=2 entries=125 op=nft_register_chain pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 00:53:06.556000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=70096 a0=3 a1=7ffc66961e90 a2=0 a3=7ffc66961e7c items=0 ppid=3631 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:06.556000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 00:53:06.633471 kubelet[2099]: E0813 00:53:06.633129 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:06.953000 systemd-networkd[1054]: calif28cb6ef70c: Gained IPv6LL Aug 13 00:53:07.728718 systemd-networkd[1054]: cali7dc531145d2: Gained IPv6LL Aug 13 00:53:07.990653 env[1301]: time="2025-08-13T00:53:07.990252271Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:07.992920 env[1301]: time="2025-08-13T00:53:07.992861387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:07.995722 env[1301]: time="2025-08-13T00:53:07.995666900Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:07.997726 env[1301]: time="2025-08-13T00:53:07.997638800Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:07.998741 env[1301]: time="2025-08-13T00:53:07.998690130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:53:08.001825 env[1301]: time="2025-08-13T00:53:08.001262998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:53:08.029992 env[1301]: time="2025-08-13T00:53:08.029930483Z" level=info msg="CreateContainer within sandbox \"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:53:08.060723 env[1301]: time="2025-08-13T00:53:08.060651287Z" level=info msg="CreateContainer within sandbox \"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11\"" Aug 13 00:53:08.062708 env[1301]: time="2025-08-13T00:53:08.062660418Z" level=info msg="StartContainer for \"d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11\"" Aug 13 00:53:08.259952 env[1301]: time="2025-08-13T00:53:08.259105628Z" level=info msg="StartContainer for \"d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11\" returns successfully" Aug 13 00:53:08.600542 systemd[1]: Started sshd@7-137.184.32.218:22-139.178.68.195:52954.service. 
Aug 13 00:53:08.608832 kernel: kauditd_printk_skb: 580 callbacks suppressed Aug 13 00:53:08.608978 kernel: audit: type=1130 audit(1755046388.600:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.32.218:22-139.178.68.195:52954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:08.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.32.218:22-139.178.68.195:52954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:08.717781 kubelet[2099]: I0813 00:53:08.717661 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d798fdc4f-v55r9" podStartSLOduration=25.067697112 podStartE2EDuration="31.717625537s" podCreationTimestamp="2025-08-13 00:52:37 +0000 UTC" firstStartedPulling="2025-08-13 00:53:01.350433621 +0000 UTC m=+45.513124993" lastFinishedPulling="2025-08-13 00:53:08.000362049 +0000 UTC m=+52.163053418" observedRunningTime="2025-08-13 00:53:08.716986283 +0000 UTC m=+52.879677687" watchObservedRunningTime="2025-08-13 00:53:08.717625537 +0000 UTC m=+52.880316924" Aug 13 00:53:08.773000 audit[4400]: USER_ACCT pid=4400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.778628 kernel: audit: type=1101 audit(1755046388.773:426): pid=4400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.780121 sshd[4400]: Accepted publickey for core from 139.178.68.195 
port 52954 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:08.782000 audit[4400]: CRED_ACQ pid=4400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.786545 kernel: audit: type=1103 audit(1755046388.782:427): pid=4400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.796293 kernel: audit: type=1006 audit(1755046388.788:428): pid=4400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Aug 13 00:53:08.796473 kernel: audit: type=1300 audit(1755046388.788:428): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1fdc7960 a2=3 a3=0 items=0 ppid=1 pid=4400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:08.796886 kernel: audit: type=1327 audit(1755046388.788:428): proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:08.788000 audit[4400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1fdc7960 a2=3 a3=0 items=0 ppid=1 pid=4400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:08.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:08.798610 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:08.829561 systemd-logind[1292]: New session 8 of user core. Aug 13 00:53:08.830665 systemd[1]: Started session-8.scope. 
Aug 13 00:53:08.851488 kernel: audit: type=1105 audit(1755046388.842:429): pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.851650 kernel: audit: type=1103 audit(1755046388.842:430): pid=4421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.842000 audit[4400]: USER_START pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:08.842000 audit[4421]: CRED_ACQ pid=4421 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:09.624808 env[1301]: time="2025-08-13T00:53:09.624415843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:09.628045 env[1301]: time="2025-08-13T00:53:09.627996590Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:09.630810 env[1301]: time="2025-08-13T00:53:09.630765389Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:09.633052 env[1301]: time="2025-08-13T00:53:09.633013281Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:09.634290 env[1301]: time="2025-08-13T00:53:09.634235448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:53:09.637090 env[1301]: time="2025-08-13T00:53:09.637032267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:53:09.641862 env[1301]: time="2025-08-13T00:53:09.641796826Z" level=info msg="CreateContainer within sandbox \"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:53:09.693952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436313480.mount: Deactivated successfully. 
Aug 13 00:53:09.723540 env[1301]: time="2025-08-13T00:53:09.723480239Z" level=info msg="CreateContainer within sandbox \"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"60325833a2e0ae8bb7ab77f3ce38c9b3ffcb0fbfddefaaaabc01311eb558f492\"" Aug 13 00:53:09.724796 env[1301]: time="2025-08-13T00:53:09.724744065Z" level=info msg="StartContainer for \"60325833a2e0ae8bb7ab77f3ce38c9b3ffcb0fbfddefaaaabc01311eb558f492\"" Aug 13 00:53:09.735508 sshd[4400]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:09.747708 kernel: audit: type=1106 audit(1755046389.737:431): pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:09.747912 kernel: audit: type=1104 audit(1755046389.737:432): pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:09.737000 audit[4400]: USER_END pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:09.737000 audit[4400]: CRED_DISP pid=4400 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:09.747423 systemd[1]: sshd@7-137.184.32.218:22-139.178.68.195:52954.service: Deactivated successfully. 
Aug 13 00:53:09.748884 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:53:09.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.32.218:22-139.178.68.195:52954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:09.792865 systemd-logind[1292]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:53:09.797097 systemd-logind[1292]: Removed session 8. Aug 13 00:53:09.861317 env[1301]: time="2025-08-13T00:53:09.861260420Z" level=info msg="StartContainer for \"60325833a2e0ae8bb7ab77f3ce38c9b3ffcb0fbfddefaaaabc01311eb558f492\" returns successfully" Aug 13 00:53:12.938869 env[1301]: time="2025-08-13T00:53:12.938734678Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:12.941467 env[1301]: time="2025-08-13T00:53:12.941025780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:12.942724 env[1301]: time="2025-08-13T00:53:12.942668937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:12.944319 env[1301]: time="2025-08-13T00:53:12.944280943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:12.944988 env[1301]: time="2025-08-13T00:53:12.944954475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:53:12.949988 env[1301]: time="2025-08-13T00:53:12.949937564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:53:12.957773 env[1301]: time="2025-08-13T00:53:12.957726400Z" level=info msg="CreateContainer within sandbox \"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:53:12.982008 env[1301]: time="2025-08-13T00:53:12.981925389Z" level=info msg="CreateContainer within sandbox \"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41dd5eb9b0f5043c85277843acaf9182ea030b229ecf85c29c48895ffdcef09e\"" Aug 13 00:53:12.985901 env[1301]: time="2025-08-13T00:53:12.985863313Z" level=info msg="StartContainer for \"41dd5eb9b0f5043c85277843acaf9182ea030b229ecf85c29c48895ffdcef09e\"" Aug 13 00:53:13.045533 systemd[1]: run-containerd-runc-k8s.io-41dd5eb9b0f5043c85277843acaf9182ea030b229ecf85c29c48895ffdcef09e-runc.CzRzVB.mount: Deactivated successfully. 
Aug 13 00:53:13.126337 env[1301]: time="2025-08-13T00:53:13.126271038Z" level=info msg="StartContainer for \"41dd5eb9b0f5043c85277843acaf9182ea030b229ecf85c29c48895ffdcef09e\" returns successfully" Aug 13 00:53:13.795482 kubelet[2099]: I0813 00:53:13.790554 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-vlzdm" podStartSLOduration=34.589180707 podStartE2EDuration="42.788659027s" podCreationTimestamp="2025-08-13 00:52:31 +0000 UTC" firstStartedPulling="2025-08-13 00:53:04.749979005 +0000 UTC m=+48.912670377" lastFinishedPulling="2025-08-13 00:53:12.949457331 +0000 UTC m=+57.112148697" observedRunningTime="2025-08-13 00:53:13.782662678 +0000 UTC m=+57.945354067" watchObservedRunningTime="2025-08-13 00:53:13.788659027 +0000 UTC m=+57.951350415" Aug 13 00:53:13.886000 audit[4519]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:13.897742 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:53:13.897989 kernel: audit: type=1325 audit(1755046393.886:434): table=filter:114 family=2 entries=14 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:13.898055 kernel: audit: type=1300 audit(1755046393.886:434): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3703d430 a2=0 a3=7ffc3703d41c items=0 ppid=2236 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:13.898099 kernel: audit: type=1327 audit(1755046393.886:434): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:13.886000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3703d430 a2=0 a3=7ffc3703d41c items=0 
ppid=2236 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:13.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:13.898000 audit[4519]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:13.898000 audit[4519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3703d430 a2=0 a3=7ffc3703d41c items=0 ppid=2236 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:13.907571 kernel: audit: type=1325 audit(1755046393.898:435): table=nat:115 family=2 entries=20 op=nft_register_rule pid=4519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:13.907725 kernel: audit: type=1300 audit(1755046393.898:435): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3703d430 a2=0 a3=7ffc3703d41c items=0 ppid=2236 pid=4519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:13.907760 kernel: audit: type=1327 audit(1755046393.898:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:13.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:14.745701 systemd[1]: Started sshd@8-137.184.32.218:22-139.178.68.195:55878.service. 
Aug 13 00:53:14.752774 kernel: audit: type=1130 audit(1755046394.745:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.32.218:22-139.178.68.195:55878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:14.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.32.218:22-139.178.68.195:55878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:15.031501 kubelet[2099]: I0813 00:53:15.029952 2099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:53:15.063000 audit[4520]: USER_ACCT pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:15.069210 kernel: audit: type=1101 audit(1755046395.063:437): pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:15.072767 sshd[4520]: Accepted publickey for core from 139.178.68.195 port 55878 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:15.079600 kernel: audit: type=1103 audit(1755046395.073:438): pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:15.073000 audit[4520]: CRED_ACQ pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:15.078000 audit[4520]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb7e4d640 a2=3 a3=0 items=0 ppid=1 pid=4520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:15.083926 kernel: audit: type=1006 audit(1755046395.078:439): pid=4520 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Aug 13 00:53:15.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:15.084958 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:15.099835 systemd-logind[1292]: New session 9 of user core. Aug 13 00:53:15.101390 systemd[1]: Started session-9.scope. Aug 13 00:53:15.119000 audit[4520]: USER_START pid=4520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:15.121000 audit[4524]: CRED_ACQ pid=4524 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:16.193979 sshd[4520]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:16.197000 audit[4520]: USER_END pid=4520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Aug 13 00:53:16.197000 audit[4520]: CRED_DISP pid=4520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:16.199786 systemd[1]: sshd@8-137.184.32.218:22-139.178.68.195:55878.service: Deactivated successfully. Aug 13 00:53:16.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.32.218:22-139.178.68.195:55878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:16.201336 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:53:16.204229 systemd-logind[1292]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:53:16.205885 systemd-logind[1292]: Removed session 9. Aug 13 00:53:16.421181 env[1301]: time="2025-08-13T00:53:16.421127637Z" level=info msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" Aug 13 00:53:16.508008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945913805.mount: Deactivated successfully. Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.777 [WARNING][4545] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0", GenerateName:"calico-kube-controllers-6d798fdc4f-", Namespace:"calico-system", SelfLink:"", UID:"ca17bcc4-9279-4936-bb62-b2a432984a63", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d798fdc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22", Pod:"calico-kube-controllers-6d798fdc4f-v55r9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d5b436c8b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.780 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.780 [INFO][4545] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" iface="eth0" netns="" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.780 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.780 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.972 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.975 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.976 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.989 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.989 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.992 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:16.999661 env[1301]: 2025-08-13 00:53:16.996 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:17.002617 env[1301]: time="2025-08-13T00:53:16.999716059Z" level=info msg="TearDown network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" successfully" Aug 13 00:53:17.002617 env[1301]: time="2025-08-13T00:53:16.999766221Z" level=info msg="StopPodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" returns successfully" Aug 13 00:53:17.028101 env[1301]: time="2025-08-13T00:53:17.028039091Z" level=info msg="RemovePodSandbox for \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" Aug 13 00:53:17.028322 env[1301]: time="2025-08-13T00:53:17.028104690Z" level=info msg="Forcibly stopping sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\"" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.111 [WARNING][4576] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0", GenerateName:"calico-kube-controllers-6d798fdc4f-", Namespace:"calico-system", SelfLink:"", UID:"ca17bcc4-9279-4936-bb62-b2a432984a63", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d798fdc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ba27a1ea04307b91297fac5ce895e909fe314cc1599e5746a85ee8491cabca22", Pod:"calico-kube-controllers-6d798fdc4f-v55r9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d5b436c8b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.112 [INFO][4576] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.112 [INFO][4576] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" iface="eth0" netns="" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.112 [INFO][4576] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.112 [INFO][4576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.178 [INFO][4583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.178 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.179 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.189 [WARNING][4583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.189 [INFO][4583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" HandleID="k8s-pod-network.6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--kube--controllers--6d798fdc4f--v55r9-eth0" Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.192 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:17.210817 env[1301]: 2025-08-13 00:53:17.198 [INFO][4576] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2" Aug 13 00:53:17.212199 env[1301]: time="2025-08-13T00:53:17.210873246Z" level=info msg="TearDown network for sandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" successfully" Aug 13 00:53:17.217244 env[1301]: time="2025-08-13T00:53:17.217169540Z" level=info msg="RemovePodSandbox \"6ba15892ec2edca9cffdc5b2a10186f437aa7f324fa73ba3c187933af5116fa2\" returns successfully" Aug 13 00:53:17.229618 env[1301]: time="2025-08-13T00:53:17.229561667Z" level=info msg="StopPodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.353 [WARNING][4598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"286d844b-f8f2-4cbc-961c-669a123d9626", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc", Pod:"coredns-7c65d6cfc9-vhxc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42e6eeb4c73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:17.424697 env[1301]: 2025-08-13 
00:53:17.354 [INFO][4598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.354 [INFO][4598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" iface="eth0" netns="" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.354 [INFO][4598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.354 [INFO][4598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.398 [INFO][4605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.399 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.399 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.409 [WARNING][4605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.409 [INFO][4605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.412 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:17.424697 env[1301]: 2025-08-13 00:53:17.421 [INFO][4598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.427126 env[1301]: time="2025-08-13T00:53:17.425670202Z" level=info msg="TearDown network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" successfully" Aug 13 00:53:17.427126 env[1301]: time="2025-08-13T00:53:17.425724573Z" level=info msg="StopPodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" returns successfully" Aug 13 00:53:17.427712 env[1301]: time="2025-08-13T00:53:17.427669018Z" level=info msg="RemovePodSandbox for \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" Aug 13 00:53:17.427991 env[1301]: time="2025-08-13T00:53:17.427926269Z" level=info msg="Forcibly stopping sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\"" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.520 [WARNING][4621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"286d844b-f8f2-4cbc-961c-669a123d9626", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b9d0bf7544b587545b6d35775bd4867ad7f2043c50061ff17bf1643242d0fdcc", Pod:"coredns-7c65d6cfc9-vhxc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42e6eeb4c73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:17.611111 env[1301]: 2025-08-13 
00:53:17.521 [INFO][4621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.521 [INFO][4621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" iface="eth0" netns="" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.521 [INFO][4621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.521 [INFO][4621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.587 [INFO][4628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.588 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.588 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.602 [WARNING][4628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.602 [INFO][4628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" HandleID="k8s-pod-network.731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--vhxc6-eth0" Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.605 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:17.611111 env[1301]: 2025-08-13 00:53:17.608 [INFO][4621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c" Aug 13 00:53:17.612800 env[1301]: time="2025-08-13T00:53:17.611569050Z" level=info msg="TearDown network for sandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" successfully" Aug 13 00:53:17.615471 env[1301]: time="2025-08-13T00:53:17.615400085Z" level=info msg="RemovePodSandbox \"731c9e04eaf85c5d82509fb1c9d6a51fd443e7f96e0dd5fb535e2d0357234b2c\" returns successfully" Aug 13 00:53:17.616466 env[1301]: time="2025-08-13T00:53:17.616401851Z" level=info msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.688 [WARNING][4644] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3eb358bf-331f-4984-a4e4-9d6d55f60ba0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337", Pod:"calico-apiserver-6c77cb5bfc-pg6q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7dc531145d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.689 [INFO][4644] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.689 [INFO][4644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" iface="eth0" netns="" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.689 [INFO][4644] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.689 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.752 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.752 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.752 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.760 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.760 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.763 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:17.768296 env[1301]: 2025-08-13 00:53:17.766 [INFO][4644] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.770295 env[1301]: time="2025-08-13T00:53:17.769536820Z" level=info msg="TearDown network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" successfully" Aug 13 00:53:17.770295 env[1301]: time="2025-08-13T00:53:17.769581446Z" level=info msg="StopPodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" returns successfully" Aug 13 00:53:17.770295 env[1301]: time="2025-08-13T00:53:17.770204496Z" level=info msg="RemovePodSandbox for \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" Aug 13 00:53:17.770295 env[1301]: time="2025-08-13T00:53:17.770239105Z" level=info msg="Forcibly stopping sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\"" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.853 [WARNING][4667] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3eb358bf-331f-4984-a4e4-9d6d55f60ba0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337", Pod:"calico-apiserver-6c77cb5bfc-pg6q9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7dc531145d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.853 [INFO][4667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.853 [INFO][4667] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" iface="eth0" netns="" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.853 [INFO][4667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.853 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.895 [INFO][4675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.895 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.895 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.916 [WARNING][4675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.916 [INFO][4675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" HandleID="k8s-pod-network.3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--pg6q9-eth0" Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.925 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:17.938259 env[1301]: 2025-08-13 00:53:17.934 [INFO][4667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8" Aug 13 00:53:17.939115 env[1301]: time="2025-08-13T00:53:17.939076292Z" level=info msg="TearDown network for sandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" successfully" Aug 13 00:53:17.942026 env[1301]: time="2025-08-13T00:53:17.941979561Z" level=info msg="RemovePodSandbox \"3e7c0bc2afbc439ab1326f91936f2e0222c7fa30c7db59056fdad45669d210a8\" returns successfully" Aug 13 00:53:17.943929 env[1301]: time="2025-08-13T00:53:17.942888878Z" level=info msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" Aug 13 00:53:17.988309 env[1301]: time="2025-08-13T00:53:17.988212167Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:17.993535 env[1301]: time="2025-08-13T00:53:17.991328339Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:18.021249 env[1301]: time="2025-08-13T00:53:18.021010384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:18.026170 env[1301]: time="2025-08-13T00:53:18.026112319Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:18.028276 env[1301]: time="2025-08-13T00:53:18.027428965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:53:18.030996 env[1301]: time="2025-08-13T00:53:18.030949552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:53:18.065615 env[1301]: time="2025-08-13T00:53:18.065568236Z" level=info msg="CreateContainer within sandbox \"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:53:18.100272 env[1301]: time="2025-08-13T00:53:18.100191804Z" level=info msg="CreateContainer within sandbox \"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1aa9c32ae5b5f8e45ef45b46e6a809232e6a6a7abde8524fefb2daa5035f5887\"" Aug 13 00:53:18.107761 env[1301]: time="2025-08-13T00:53:18.107712220Z" level=info msg="StartContainer for \"1aa9c32ae5b5f8e45ef45b46e6a809232e6a6a7abde8524fefb2daa5035f5887\"" Aug 13 00:53:18.139010 systemd[1]: 
run-containerd-runc-k8s.io-d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11-runc.55NHqq.mount: Deactivated successfully. Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.078 [WARNING][4691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"167dde7c-6e36-48ea-bd63-42e66d6a64d2", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d", Pod:"calico-apiserver-6c77cb5bfc-vlzdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcb42a18103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 
00:53:18.196372 env[1301]: 2025-08-13 00:53:18.078 [INFO][4691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.078 [INFO][4691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" iface="eth0" netns="" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.079 [INFO][4691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.079 [INFO][4691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.165 [INFO][4705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.169 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.169 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.181 [WARNING][4705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.181 [INFO][4705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.184 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:18.196372 env[1301]: 2025-08-13 00:53:18.187 [INFO][4691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.196372 env[1301]: time="2025-08-13T00:53:18.194490666Z" level=info msg="TearDown network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" successfully" Aug 13 00:53:18.198369 env[1301]: time="2025-08-13T00:53:18.194539370Z" level=info msg="StopPodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" returns successfully" Aug 13 00:53:18.198978 env[1301]: time="2025-08-13T00:53:18.198919322Z" level=info msg="RemovePodSandbox for \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" Aug 13 00:53:18.201332 env[1301]: time="2025-08-13T00:53:18.201121383Z" level=info msg="Forcibly stopping sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\"" Aug 13 00:53:18.351048 env[1301]: time="2025-08-13T00:53:18.350938080Z" level=info msg="StartContainer for \"1aa9c32ae5b5f8e45ef45b46e6a809232e6a6a7abde8524fefb2daa5035f5887\" returns successfully" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.349 
[WARNING][4748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0", GenerateName:"calico-apiserver-6c77cb5bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"167dde7c-6e36-48ea-bd63-42e66d6a64d2", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c77cb5bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"ffd09ba60a957ba77e2bcad450717b14d1681d7a2b919c79881c042324f1f62d", Pod:"calico-apiserver-6c77cb5bfc-vlzdm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcb42a18103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.349 [INFO][4748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 
00:53:18.427980 env[1301]: 2025-08-13 00:53:18.350 [INFO][4748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" iface="eth0" netns="" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.350 [INFO][4748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.350 [INFO][4748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.392 [INFO][4775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.393 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.393 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.415 [WARNING][4775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.415 [INFO][4775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" HandleID="k8s-pod-network.4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-calico--apiserver--6c77cb5bfc--vlzdm-eth0" Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.420 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:18.427980 env[1301]: 2025-08-13 00:53:18.423 [INFO][4748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680" Aug 13 00:53:18.430028 env[1301]: time="2025-08-13T00:53:18.429971292Z" level=info msg="TearDown network for sandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" successfully" Aug 13 00:53:18.434268 env[1301]: time="2025-08-13T00:53:18.434212160Z" level=info msg="RemovePodSandbox \"4f1f65078c638b99aaf83aa9d68e85e1899e85b02e3027a81c4e8f3250d96680\" returns successfully" Aug 13 00:53:18.435390 env[1301]: time="2025-08-13T00:53:18.435347817Z" level=info msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.525 [WARNING][4793] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e647cce1-5592-4786-90e1-64c87a11f433", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9", Pod:"goldmane-58fd7646b9-qzqlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7136af6eada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.525 [INFO][4793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.525 [INFO][4793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" iface="eth0" netns="" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.525 [INFO][4793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.525 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.599 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.599 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.599 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.609 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.609 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.613 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:18.618622 env[1301]: 2025-08-13 00:53:18.615 [INFO][4793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.621047 env[1301]: time="2025-08-13T00:53:18.618849122Z" level=info msg="TearDown network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" successfully" Aug 13 00:53:18.621047 env[1301]: time="2025-08-13T00:53:18.618906220Z" level=info msg="StopPodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" returns successfully" Aug 13 00:53:18.621340 env[1301]: time="2025-08-13T00:53:18.621305252Z" level=info msg="RemovePodSandbox for \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" Aug 13 00:53:18.621537 env[1301]: time="2025-08-13T00:53:18.621473006Z" level=info msg="Forcibly stopping sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\"" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.710 [WARNING][4815] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"e647cce1-5592-4786-90e1-64c87a11f433", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"076eb4f661e07fd31766718730c1224ae0025f819c35aa904347540b9ef3d8e9", Pod:"goldmane-58fd7646b9-qzqlj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7136af6eada", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.710 [INFO][4815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.710 [INFO][4815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" iface="eth0" netns="" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.710 [INFO][4815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.710 [INFO][4815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.758 [INFO][4824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.759 [INFO][4824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.759 [INFO][4824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.775 [WARNING][4824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.775 [INFO][4824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" HandleID="k8s-pod-network.fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-goldmane--58fd7646b9--qzqlj-eth0" Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.786 [INFO][4824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:18.794636 env[1301]: 2025-08-13 00:53:18.791 [INFO][4815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1" Aug 13 00:53:18.795908 env[1301]: time="2025-08-13T00:53:18.795138460Z" level=info msg="TearDown network for sandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" successfully" Aug 13 00:53:18.802622 env[1301]: time="2025-08-13T00:53:18.802496458Z" level=info msg="RemovePodSandbox \"fc8eee40653d3ab7684b94dad88d057b09db162741d9bbd95a4fac4493c6abf1\" returns successfully" Aug 13 00:53:18.803871 env[1301]: time="2025-08-13T00:53:18.803825990Z" level=info msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.893 [WARNING][4855] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.894 [INFO][4855] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.894 [INFO][4855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" iface="eth0" netns="" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.894 [INFO][4855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.894 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.963 [INFO][4867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.966 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.967 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.979 [WARNING][4867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.979 [INFO][4867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.984 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:18.997769 env[1301]: 2025-08-13 00:53:18.991 [INFO][4855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:18.997769 env[1301]: time="2025-08-13T00:53:18.997516941Z" level=info msg="TearDown network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" successfully" Aug 13 00:53:18.997769 env[1301]: time="2025-08-13T00:53:18.997554285Z" level=info msg="StopPodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" returns successfully" Aug 13 00:53:18.999871 env[1301]: time="2025-08-13T00:53:18.998871850Z" level=info msg="RemovePodSandbox for \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" Aug 13 00:53:18.999871 env[1301]: time="2025-08-13T00:53:18.998908059Z" level=info msg="Forcibly stopping sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\"" Aug 13 00:53:19.033258 kubelet[2099]: I0813 00:53:19.025970 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qzqlj" podStartSLOduration=31.000260949 podStartE2EDuration="43.016951664s" 
podCreationTimestamp="2025-08-13 00:52:36 +0000 UTC" firstStartedPulling="2025-08-13 00:53:06.013958328 +0000 UTC m=+50.176649700" lastFinishedPulling="2025-08-13 00:53:18.030649029 +0000 UTC m=+62.193340415" observedRunningTime="2025-08-13 00:53:19.011625601 +0000 UTC m=+63.174316988" watchObservedRunningTime="2025-08-13 00:53:19.016951664 +0000 UTC m=+63.179643051" Aug 13 00:53:19.121755 kernel: kauditd_printk_skb: 7 callbacks suppressed Aug 13 00:53:19.122341 kernel: audit: type=1325 audit(1755046399.111:445): table=filter:116 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:19.125265 kernel: audit: type=1300 audit(1755046399.111:445): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe47c2aa90 a2=0 a3=7ffe47c2aa7c items=0 ppid=2236 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:19.125366 kernel: audit: type=1327 audit(1755046399.111:445): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:19.111000 audit[4886]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:19.137949 kernel: audit: type=1325 audit(1755046399.122:446): table=nat:117 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:19.138508 kernel: audit: type=1300 audit(1755046399.122:446): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe47c2aa90 a2=0 a3=7ffe47c2aa7c items=0 ppid=2236 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
00:53:19.138584 kernel: audit: type=1327 audit(1755046399.122:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:19.111000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe47c2aa90 a2=0 a3=7ffe47c2aa7c items=0 ppid=2236 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:19.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:19.122000 audit[4886]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:19.122000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe47c2aa90 a2=0 a3=7ffe47c2aa7c items=0 ppid=2236 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:19.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.154 [WARNING][4881] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" WorkloadEndpoint="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.155 [INFO][4881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.155 [INFO][4881] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" iface="eth0" netns="" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.155 [INFO][4881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.155 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.258 [INFO][4890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.259 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.259 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.292 [WARNING][4890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.292 [INFO][4890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" HandleID="k8s-pod-network.67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-whisker--6b78684bd4--f847k-eth0" Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.297 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:19.314400 env[1301]: 2025-08-13 00:53:19.310 [INFO][4881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b" Aug 13 00:53:19.315811 env[1301]: time="2025-08-13T00:53:19.315591934Z" level=info msg="TearDown network for sandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" successfully" Aug 13 00:53:19.320379 env[1301]: time="2025-08-13T00:53:19.319757592Z" level=info msg="RemovePodSandbox \"67956802524eeddb45f6b09c45b4254502ed1cc8005141d77703e618d5e2044b\" returns successfully" Aug 13 00:53:19.370477 env[1301]: time="2025-08-13T00:53:19.370416882Z" level=info msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.464 [WARNING][4905] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e65125c0-f7bb-420d-885a-928dd8165be9", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7", Pod:"csi-node-driver-n8qkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif28cb6ef70c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.466 [INFO][4905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.466 [INFO][4905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" iface="eth0" netns="" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.466 [INFO][4905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.466 [INFO][4905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.553 [INFO][4912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.553 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.554 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.563 [WARNING][4912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.563 [INFO][4912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.566 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:19.588019 env[1301]: 2025-08-13 00:53:19.580 [INFO][4905] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.588019 env[1301]: time="2025-08-13T00:53:19.583398431Z" level=info msg="TearDown network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" successfully" Aug 13 00:53:19.588019 env[1301]: time="2025-08-13T00:53:19.583433799Z" level=info msg="StopPodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" returns successfully" Aug 13 00:53:19.649825 env[1301]: time="2025-08-13T00:53:19.649755800Z" level=info msg="RemovePodSandbox for \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" Aug 13 00:53:19.649825 env[1301]: time="2025-08-13T00:53:19.649807632Z" level=info msg="Forcibly stopping sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\"" Aug 13 00:53:19.821167 env[1301]: time="2025-08-13T00:53:19.821101613Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:19.826835 env[1301]: 
time="2025-08-13T00:53:19.826774438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:19.828012 env[1301]: time="2025-08-13T00:53:19.827968724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:19.832093 env[1301]: time="2025-08-13T00:53:19.831193646Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:19.834139 env[1301]: time="2025-08-13T00:53:19.832898273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:53:19.837851 env[1301]: time="2025-08-13T00:53:19.836339583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:53:19.841278 env[1301]: time="2025-08-13T00:53:19.841124468Z" level=info msg="CreateContainer within sandbox \"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.754 [WARNING][4926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e65125c0-f7bb-420d-885a-928dd8165be9", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7", Pod:"csi-node-driver-n8qkv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif28cb6ef70c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.755 [INFO][4926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.755 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" iface="eth0" netns="" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.755 [INFO][4926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.755 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.819 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.819 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.819 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.832 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.832 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" HandleID="k8s-pod-network.8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-csi--node--driver--n8qkv-eth0" Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.835 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:19.849773 env[1301]: 2025-08-13 00:53:19.845 [INFO][4926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627" Aug 13 00:53:19.851136 env[1301]: time="2025-08-13T00:53:19.850432269Z" level=info msg="TearDown network for sandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" successfully" Aug 13 00:53:19.855706 env[1301]: time="2025-08-13T00:53:19.855635654Z" level=info msg="RemovePodSandbox \"8bf013f1466e855d71e312b26f716daa933eb35db83ef199c4900e1f9f2e5627\" returns successfully" Aug 13 00:53:19.874282 env[1301]: time="2025-08-13T00:53:19.874218484Z" level=info msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" Aug 13 00:53:19.875722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526988841.mount: Deactivated successfully. 
Aug 13 00:53:19.886257 env[1301]: time="2025-08-13T00:53:19.886169807Z" level=info msg="CreateContainer within sandbox \"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ef3346b2c586d7a01937db82122c192eb6f1624e8727e35227023902758f553e\"" Aug 13 00:53:19.891043 env[1301]: time="2025-08-13T00:53:19.890979364Z" level=info msg="StartContainer for \"ef3346b2c586d7a01937db82122c192eb6f1624e8727e35227023902758f553e\"" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.008 [WARNING][4950] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5af231c3-7046-4023-9f1c-637c842bb333", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762", Pod:"coredns-7c65d6cfc9-5846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali400aa52c920", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.011 [INFO][4950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.011 [INFO][4950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" iface="eth0" netns="" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.012 [INFO][4950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.012 [INFO][4950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.058 [INFO][4992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.059 [INFO][4992] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.059 [INFO][4992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.067 [WARNING][4992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.067 [INFO][4992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.070 [INFO][4992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:20.077091 env[1301]: 2025-08-13 00:53:20.073 [INFO][4950] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.078877 env[1301]: time="2025-08-13T00:53:20.077142824Z" level=info msg="TearDown network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" successfully" Aug 13 00:53:20.078877 env[1301]: time="2025-08-13T00:53:20.077191227Z" level=info msg="StopPodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" returns successfully" Aug 13 00:53:20.080349 env[1301]: time="2025-08-13T00:53:20.080291635Z" level=info msg="RemovePodSandbox for \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" Aug 13 00:53:20.080592 env[1301]: time="2025-08-13T00:53:20.080365938Z" level=info msg="Forcibly stopping sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\"" Aug 13 00:53:20.159511 env[1301]: time="2025-08-13T00:53:20.157712270Z" level=info msg="StartContainer for \"ef3346b2c586d7a01937db82122c192eb6f1624e8727e35227023902758f553e\" returns successfully" Aug 13 00:53:20.239973 env[1301]: time="2025-08-13T00:53:20.239907509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:20.242494 env[1301]: time="2025-08-13T00:53:20.242399097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:20.246841 env[1301]: time="2025-08-13T00:53:20.246762323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:20.252411 env[1301]: time="2025-08-13T00:53:20.252359803Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:20.252971 env[1301]: time="2025-08-13T00:53:20.252901430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:53:20.260032 env[1301]: time="2025-08-13T00:53:20.259976488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:53:20.264181 env[1301]: time="2025-08-13T00:53:20.264116105Z" level=info msg="CreateContainer within sandbox \"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:53:20.295752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254210997.mount: Deactivated successfully. Aug 13 00:53:20.296489 env[1301]: time="2025-08-13T00:53:20.296400045Z" level=info msg="CreateContainer within sandbox \"cf43d8428c4c66aaa17b050399297698d9e0cae3654920ec5689dc192e7de337\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"20725f1e379a4cfc8c417127273879af6c68bb808b4e6ac50822127457632707\"" Aug 13 00:53:20.305636 env[1301]: time="2025-08-13T00:53:20.305589241Z" level=info msg="StartContainer for \"20725f1e379a4cfc8c417127273879af6c68bb808b4e6ac50822127457632707\"" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.188 [WARNING][5012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"5af231c3-7046-4023-9f1c-637c842bb333", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 52, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-8-adc8b0fbd5", ContainerID:"b8d27689002c6e6b53c0a58f6e70c335ddf9b65605b1851afb41bbb79415f762", Pod:"coredns-7c65d6cfc9-5846w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali400aa52c920", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:53:20.326780 env[1301]: 2025-08-13 
00:53:20.189 [INFO][5012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.189 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" iface="eth0" netns="" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.189 [INFO][5012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.189 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.241 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.241 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.241 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.267 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.268 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" HandleID="k8s-pod-network.03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Workload="ci--3510.3.8--8--adc8b0fbd5-k8s-coredns--7c65d6cfc9--5846w-eth0" Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.271 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:53:20.326780 env[1301]: 2025-08-13 00:53:20.304 [INFO][5012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6" Aug 13 00:53:20.332757 env[1301]: time="2025-08-13T00:53:20.327070123Z" level=info msg="TearDown network for sandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" successfully" Aug 13 00:53:20.338504 env[1301]: time="2025-08-13T00:53:20.338404540Z" level=info msg="RemovePodSandbox \"03940213374b72d834c8e89bd738313660b0a052f1e547c9f9ccd6a7faca6cc6\" returns successfully" Aug 13 00:53:20.467086 env[1301]: time="2025-08-13T00:53:20.464838051Z" level=info msg="StartContainer for \"20725f1e379a4cfc8c417127273879af6c68bb808b4e6ac50822127457632707\" returns successfully" Aug 13 00:53:21.090163 systemd[1]: run-containerd-runc-k8s.io-20725f1e379a4cfc8c417127273879af6c68bb808b4e6ac50822127457632707-runc.MGkF3G.mount: Deactivated successfully. 
Aug 13 00:53:21.096000 audit[5091]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:21.096000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefbdd26c0 a2=0 a3=7ffefbdd26ac items=0 ppid=2236 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:21.104828 kernel: audit: type=1325 audit(1755046401.096:447): table=filter:118 family=2 entries=14 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:21.105086 kernel: audit: type=1300 audit(1755046401.096:447): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefbdd26c0 a2=0 a3=7ffefbdd26ac items=0 ppid=2236 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:21.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:21.112064 kernel: audit: type=1327 audit(1755046401.096:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:21.106000 audit[5091]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:21.106000 audit[5091]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffefbdd26c0 a2=0 a3=7ffefbdd26ac items=0 ppid=2236 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null)
Aug 13 00:53:21.129683 kernel: audit: type=1325 audit(1755046401.106:448): table=nat:119 family=2 entries=20 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:21.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:21.210567 systemd[1]: Started sshd@9-137.184.32.218:22-139.178.68.195:50542.service.
Aug 13 00:53:21.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.32.218:22-139.178.68.195:50542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.339000 audit[5096]: USER_ACCT pid=5096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:21.341189 sshd[5096]: Accepted publickey for core from 139.178.68.195 port 50542 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:53:21.343000 audit[5096]: CRED_ACQ pid=5096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:21.343000 audit[5096]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdee504e0 a2=3 a3=0 items=0 ppid=1 pid=5096 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:21.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:21.348044 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:21.372099 systemd[1]: Started session-10.scope.
Aug 13 00:53:21.373595 systemd-logind[1292]: New session 10 of user core.
Aug 13 00:53:21.392000 audit[5096]: USER_START pid=5096 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:21.395000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.442113 sshd[5096]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:22.447000 audit[5096]: USER_END pid=5096 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.447000 audit[5096]: CRED_DISP pid=5096 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.32.218:22-139.178.68.195:50546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.32.218:22-139.178.68.195:50542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.452567 systemd[1]: Started sshd@10-137.184.32.218:22-139.178.68.195:50546.service.
Aug 13 00:53:22.453622 systemd[1]: sshd@9-137.184.32.218:22-139.178.68.195:50542.service: Deactivated successfully.
Aug 13 00:53:22.463856 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:53:22.470512 systemd-logind[1292]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:53:22.488076 systemd-logind[1292]: Removed session 10.
Aug 13 00:53:22.548000 audit[5111]: USER_ACCT pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.549500 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 50546 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:53:22.550000 audit[5111]: CRED_ACQ pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.550000 audit[5111]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7ff57740 a2=3 a3=0 items=0 ppid=1 pid=5111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:22.550000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:22.551128 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:22.562781 systemd[1]: Started session-11.scope.
Aug 13 00:53:22.563126 systemd-logind[1292]: New session 11 of user core.
Aug 13 00:53:22.603000 audit[5111]: USER_START pid=5111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:22.607000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.020382 kubelet[2099]: I0813 00:53:23.020307 2099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:53:23.179955 systemd[1]: Started sshd@11-137.184.32.218:22-139.178.68.195:50560.service.
Aug 13 00:53:23.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.32.218:22-139.178.68.195:50560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:23.185981 sshd[5111]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:23.206000 audit[5111]: USER_END pid=5111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.206000 audit[5111]: CRED_DISP pid=5111 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.32.218:22-139.178.68.195:50546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:23.209851 systemd[1]: sshd@10-137.184.32.218:22-139.178.68.195:50546.service: Deactivated successfully.
Aug 13 00:53:23.211646 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:53:23.216674 systemd-logind[1292]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:53:23.222467 systemd-logind[1292]: Removed session 11.
Aug 13 00:53:23.396000 audit[5122]: USER_ACCT pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.398753 sshd[5122]: Accepted publickey for core from 139.178.68.195 port 50560 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:53:23.400000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.401000 audit[5122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe98c3a7d0 a2=3 a3=0 items=0 ppid=1 pid=5122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:23.401000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:23.402746 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:23.419736 systemd[1]: Started session-12.scope.
Aug 13 00:53:23.422101 systemd-logind[1292]: New session 12 of user core.
Aug 13 00:53:23.460000 audit[5122]: USER_START pid=5122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.465000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.710599 sshd[5122]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:23.714000 audit[5122]: USER_END pid=5122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.715000 audit[5122]: CRED_DISP pid=5122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:23.717759 systemd[1]: sshd@11-137.184.32.218:22-139.178.68.195:50560.service: Deactivated successfully.
Aug 13 00:53:23.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.32.218:22-139.178.68.195:50560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:23.719499 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:53:23.719591 systemd-logind[1292]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:53:23.722355 systemd-logind[1292]: Removed session 12.
Aug 13 00:53:24.002111 kubelet[2099]: I0813 00:53:23.987451 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c77cb5bfc-pg6q9" podStartSLOduration=39.137007981 podStartE2EDuration="52.953295281s" podCreationTimestamp="2025-08-13 00:52:31 +0000 UTC" firstStartedPulling="2025-08-13 00:53:06.441425492 +0000 UTC m=+50.604116862" lastFinishedPulling="2025-08-13 00:53:20.25771278 +0000 UTC m=+64.420404162" observedRunningTime="2025-08-13 00:53:21.036791344 +0000 UTC m=+65.199482763" watchObservedRunningTime="2025-08-13 00:53:23.953295281 +0000 UTC m=+68.115986669"
Aug 13 00:53:24.109000 audit[5137]: NETFILTER_CFG table=filter:120 family=2 entries=13 op=nft_register_rule pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:24.109000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffee0149980 a2=0 a3=7ffee014996c items=0 ppid=2236 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:24.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:24.114000 audit[5137]: NETFILTER_CFG table=nat:121 family=2 entries=27 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:24.116731 kernel: kauditd_printk_skb: 38 callbacks suppressed
Aug 13 00:53:24.125514 kernel: audit: type=1325 audit(1755046404.114:477): table=nat:121 family=2 entries=27 op=nft_register_chain pid=5137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:24.126108 kernel: audit: type=1300 audit(1755046404.114:477): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffee0149980 a2=0 a3=7ffee014996c items=0 ppid=2236 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:24.114000 audit[5137]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffee0149980 a2=0 a3=7ffee014996c items=0 ppid=2236 pid=5137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:24.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:24.130475 kernel: audit: type=1327 audit(1755046404.114:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:24.379948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868240161.mount: Deactivated successfully.
Aug 13 00:53:24.422551 env[1301]: time="2025-08-13T00:53:24.422481289Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.425659 env[1301]: time="2025-08-13T00:53:24.425600158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.430888 env[1301]: time="2025-08-13T00:53:24.430823562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.439315 env[1301]: time="2025-08-13T00:53:24.439240760Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:24.441646 env[1301]: time="2025-08-13T00:53:24.440592900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Aug 13 00:53:24.447502 env[1301]: time="2025-08-13T00:53:24.445274268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 13 00:53:24.459560 env[1301]: time="2025-08-13T00:53:24.459500074Z" level=info msg="CreateContainer within sandbox \"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Aug 13 00:53:24.482915 env[1301]: time="2025-08-13T00:53:24.482852414Z" level=info msg="CreateContainer within sandbox \"078a0f16af8a2da793dfcba73ac2bd7de30187f2d7f13471c01b32a60884b4f8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7860e2793f9eed626a0e439beeba24e6494e084ab05714f77a00c867edccc749\""
Aug 13 00:53:24.484188 env[1301]: time="2025-08-13T00:53:24.484128879Z" level=info msg="StartContainer for \"7860e2793f9eed626a0e439beeba24e6494e084ab05714f77a00c867edccc749\""
Aug 13 00:53:24.489385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298396192.mount: Deactivated successfully.
Aug 13 00:53:24.729857 env[1301]: time="2025-08-13T00:53:24.729697066Z" level=info msg="StartContainer for \"7860e2793f9eed626a0e439beeba24e6494e084ab05714f77a00c867edccc749\" returns successfully"
Aug 13 00:53:25.067000 audit[5175]: NETFILTER_CFG table=filter:122 family=2 entries=11 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:25.072475 kernel: audit: type=1325 audit(1755046405.067:478): table=filter:122 family=2 entries=11 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:25.067000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd08195a50 a2=0 a3=7ffd08195a3c items=0 ppid=2236 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:25.080022 kernel: audit: type=1300 audit(1755046405.067:478): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd08195a50 a2=0 a3=7ffd08195a3c items=0 ppid=2236 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:25.080604 kernel: audit: type=1327 audit(1755046405.067:478): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:25.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:25.083000 audit[5175]: NETFILTER_CFG table=nat:123 family=2 entries=29 op=nft_register_chain pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:25.091902 kernel: audit: type=1325 audit(1755046405.083:479): table=nat:123 family=2 entries=29 op=nft_register_chain pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:25.094268 kernel: audit: type=1300 audit(1755046405.083:479): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd08195a50 a2=0 a3=7ffd08195a3c items=0 ppid=2236 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:25.094351 kernel: audit: type=1327 audit(1755046405.083:479): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:25.083000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffd08195a50 a2=0 a3=7ffd08195a3c items=0 ppid=2236 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:25.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:25.379579 systemd[1]: run-containerd-runc-k8s.io-7860e2793f9eed626a0e439beeba24e6494e084ab05714f77a00c867edccc749-runc.s45pYm.mount: Deactivated successfully.
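(Editor's aside, not part of the log: the audit PROCTITLE records above carry the full command line hex-encoded, with NUL bytes separating arguments. A minimal sketch of a decoder, using only the Python standard library, makes these entries readable; the function name `decode_proctitle` is our own, not part of any audit tooling.)

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex string; arguments are NUL-separated."""
    raw = bytes.fromhex(hex_str)
    # Replace the NUL argument separators with spaces to rebuild the command line.
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00"))

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding explains the sshd records: `737368643A20636F7265205B707269765D` is "sshd: core [priv]". The `ausearch -i` tool performs this interpretation natively.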
Aug 13 00:53:26.588779 env[1301]: time="2025-08-13T00:53:26.588722849Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:26.590659 env[1301]: time="2025-08-13T00:53:26.590618913Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:26.594479 env[1301]: time="2025-08-13T00:53:26.593897748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:26.596482 env[1301]: time="2025-08-13T00:53:26.595985425Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:26.596482 env[1301]: time="2025-08-13T00:53:26.596333130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 00:53:26.614203 env[1301]: time="2025-08-13T00:53:26.614153210Z" level=info msg="CreateContainer within sandbox \"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:53:26.634407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226127188.mount: Deactivated successfully.
Aug 13 00:53:26.645188 env[1301]: time="2025-08-13T00:53:26.644970361Z" level=info msg="CreateContainer within sandbox \"389a3f1f48a1e9579dc8eb445e598fbda294d4b74fbac543c803fefc787906a7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef18ff30e8d20ba856bc823c25540f4ae3a76834a8268abc0911a3e27d0b6b0a\""
Aug 13 00:53:26.649180 env[1301]: time="2025-08-13T00:53:26.649000847Z" level=info msg="StartContainer for \"ef18ff30e8d20ba856bc823c25540f4ae3a76834a8268abc0911a3e27d0b6b0a\""
Aug 13 00:53:26.745943 env[1301]: time="2025-08-13T00:53:26.745866197Z" level=info msg="StartContainer for \"ef18ff30e8d20ba856bc823c25540f4ae3a76834a8268abc0911a3e27d0b6b0a\" returns successfully"
Aug 13 00:53:27.069787 kubelet[2099]: I0813 00:53:27.069679 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n8qkv" podStartSLOduration=29.88961241 podStartE2EDuration="50.069638885s" podCreationTimestamp="2025-08-13 00:52:37 +0000 UTC" firstStartedPulling="2025-08-13 00:53:06.419116157 +0000 UTC m=+50.581807535" lastFinishedPulling="2025-08-13 00:53:26.599142627 +0000 UTC m=+70.761834010" observedRunningTime="2025-08-13 00:53:27.063538021 +0000 UTC m=+71.226229421" watchObservedRunningTime="2025-08-13 00:53:27.069638885 +0000 UTC m=+71.232330317"
Aug 13 00:53:27.071047 kubelet[2099]: I0813 00:53:27.070192 2099 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78cf69b664-bbxlk" podStartSLOduration=6.014668385 podStartE2EDuration="27.070175532s" podCreationTimestamp="2025-08-13 00:53:00 +0000 UTC" firstStartedPulling="2025-08-13 00:53:03.387607974 +0000 UTC m=+47.550299342" lastFinishedPulling="2025-08-13 00:53:24.443115108 +0000 UTC m=+68.605806489" observedRunningTime="2025-08-13 00:53:25.030720132 +0000 UTC m=+69.193411555" watchObservedRunningTime="2025-08-13 00:53:27.070175532 +0000 UTC m=+71.232866922"
Aug 13 00:53:27.504069 kubelet[2099]: I0813 00:53:27.499368 2099 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:53:27.515230 kubelet[2099]: I0813 00:53:27.515182 2099 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:53:28.718582 systemd[1]: Started sshd@12-137.184.32.218:22-139.178.68.195:50572.service.
Aug 13 00:53:28.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.32.218:22-139.178.68.195:50572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:28.726260 kernel: audit: type=1130 audit(1755046408.720:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.32.218:22-139.178.68.195:50572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:28.884000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:28.886537 sshd[5225]: Accepted publickey for core from 139.178.68.195 port 50572 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:53:28.887000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:28.888000 audit[5225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc8ae5af0 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:28.888000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:28.891275 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:28.919526 systemd-logind[1292]: New session 13 of user core.
Aug 13 00:53:28.922686 systemd[1]: Started session-13.scope.
Aug 13 00:53:28.965000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:28.971000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:29.635151 sshd[5225]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:29.642276 kernel: kauditd_printk_skb: 7 callbacks suppressed
Aug 13 00:53:29.644759 kernel: audit: type=1106 audit(1755046409.637:486): pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:29.637000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:29.639694 systemd[1]: sshd@12-137.184.32.218:22-139.178.68.195:50572.service: Deactivated successfully.
Aug 13 00:53:29.640762 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:53:29.642780 systemd-logind[1292]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:53:29.644276 systemd-logind[1292]: Removed session 13.
Aug 13 00:53:29.637000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:29.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.32.218:22-139.178.68.195:50572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:29.652844 kernel: audit: type=1104 audit(1755046409.637:487): pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:29.653013 kernel: audit: type=1131 audit(1755046409.637:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.32.218:22-139.178.68.195:50572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:31.205831 kubelet[2099]: E0813 00:53:31.205759 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:53:32.809108 kubelet[2099]: I0813 00:53:32.809040 2099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:53:32.955000 audit[5239]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:32.955000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd17602ef0 a2=0 a3=7ffd17602edc items=0 ppid=2236 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:32.966043 kernel: audit: type=1325 audit(1755046412.955:489): table=filter:124 family=2 entries=10 op=nft_register_rule pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:32.966553 kernel: audit: type=1300 audit(1755046412.955:489): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd17602ef0 a2=0 a3=7ffd17602edc items=0 ppid=2236 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:32.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:32.969570 kernel: audit: type=1327 audit(1755046412.955:489): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:32.971000 audit[5239]: NETFILTER_CFG table=nat:125 family=2 entries=36 op=nft_register_chain pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:32.971000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffd17602ef0 a2=0 a3=7ffd17602edc items=0 ppid=2236 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:32.979479 kernel: audit: type=1325 audit(1755046412.971:490): table=nat:125 family=2 entries=36 op=nft_register_chain pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Aug 13 00:53:32.979600 kernel: audit: type=1300 audit(1755046412.971:490): arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffd17602ef0 a2=0 a3=7ffd17602edc items=0 ppid=2236 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:32.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:32.982477 kernel: audit: type=1327 audit(1755046412.971:490): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Aug 13 00:53:34.642681 systemd[1]: Started sshd@13-137.184.32.218:22-139.178.68.195:57068.service.
Aug 13 00:53:34.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.32.218:22-139.178.68.195:57068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:34.648630 kernel: audit: type=1130 audit(1755046414.642:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.32.218:22-139.178.68.195:57068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:34.770000 audit[5240]: USER_ACCT pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.778339 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 57068 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:53:34.778964 kernel: audit: type=1101 audit(1755046414.770:492): pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.777000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.786974 kernel: audit: type=1103 audit(1755046414.777:493): pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.787186 kernel: audit: type=1006 audit(1755046414.777:494): pid=5240 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Aug 13 00:53:34.777000 audit[5240]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe86467760 a2=3 a3=0 items=0 ppid=1 pid=5240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:34.791674 kernel: audit: type=1300 audit(1755046414.777:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe86467760 a2=3 a3=0 items=0 ppid=1 pid=5240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:53:34.777000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:34.794049 kernel: audit: type=1327 audit(1755046414.777:494): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:53:34.794699 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:34.805687 systemd[1]: Started session-14.scope.
Aug 13 00:53:34.807060 systemd-logind[1292]: New session 14 of user core.
Aug 13 00:53:34.830000 audit[5240]: USER_START pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.837803 kernel: audit: type=1105 audit(1755046414.830:495): pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.838000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:53:34.844481 kernel: audit: type=1103 audit(1755046414.838:496): pid=5243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195
terminal=ssh res=success' Aug 13 00:53:35.431847 sshd[5240]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:35.436000 audit[5240]: USER_END pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:35.441498 kernel: audit: type=1106 audit(1755046415.436:497): pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:35.440000 audit[5240]: CRED_DISP pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:35.445521 kernel: audit: type=1104 audit(1755046415.440:498): pid=5240 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:35.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.32.218:22-139.178.68.195:57068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:35.453709 systemd[1]: sshd@13-137.184.32.218:22-139.178.68.195:57068.service: Deactivated successfully. Aug 13 00:53:35.456780 systemd-logind[1292]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:53:35.458638 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 13 00:53:35.460743 systemd-logind[1292]: Removed session 14. Aug 13 00:53:38.203281 kubelet[2099]: E0813 00:53:38.203217 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:40.439311 systemd[1]: Started sshd@14-137.184.32.218:22-139.178.68.195:51052.service. Aug 13 00:53:40.444435 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:53:40.444719 kernel: audit: type=1130 audit(1755046420.438:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.32.218:22-139.178.68.195:51052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:40.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.32.218:22-139.178.68.195:51052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:40.568000 audit[5253]: USER_ACCT pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.571758 sshd[5253]: Accepted publickey for core from 139.178.68.195 port 51052 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:40.573617 kernel: audit: type=1101 audit(1755046420.568:501): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.579653 kernel: audit: type=1103 audit(1755046420.573:502): pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.573000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.578000 audit[5253]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca6d06260 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:40.584620 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:40.586588 kernel: audit: type=1006 audit(1755046420.578:503): pid=5253 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=15 res=1 Aug 13 00:53:40.586699 kernel: audit: type=1300 audit(1755046420.578:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca6d06260 a2=3 a3=0 items=0 ppid=1 pid=5253 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:40.578000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:40.588176 kernel: audit: type=1327 audit(1755046420.578:503): proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:40.595599 systemd-logind[1292]: New session 15 of user core. Aug 13 00:53:40.596105 systemd[1]: Started session-15.scope. Aug 13 00:53:40.611000 audit[5253]: USER_START pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.619526 kernel: audit: type=1105 audit(1755046420.611:504): pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.619000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:40.625550 kernel: audit: type=1103 audit(1755046420.619:505): pid=5256 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Aug 13 00:53:41.206092 kubelet[2099]: E0813 00:53:41.205659 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:41.334223 sshd[5253]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:41.335000 audit[5253]: USER_END pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:41.341490 kernel: audit: type=1106 audit(1755046421.335:506): pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:41.341971 systemd[1]: sshd@14-137.184.32.218:22-139.178.68.195:51052.service: Deactivated successfully. Aug 13 00:53:41.343989 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:53:41.336000 audit[5253]: CRED_DISP pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:41.344548 systemd-logind[1292]: Session 15 logged out. Waiting for processes to exit. 
Aug 13 00:53:41.350023 kernel: audit: type=1104 audit(1755046421.336:507): pid=5253 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:41.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.32.218:22-139.178.68.195:51052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:41.351521 systemd-logind[1292]: Removed session 15. Aug 13 00:53:46.340254 systemd[1]: Started sshd@15-137.184.32.218:22-139.178.68.195:51064.service. Aug 13 00:53:46.346746 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:53:46.346954 kernel: audit: type=1130 audit(1755046426.340:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.32.218:22-139.178.68.195:51064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:46.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.32.218:22-139.178.68.195:51064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:46.511694 sshd[5266]: Accepted publickey for core from 139.178.68.195 port 51064 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:46.516573 kernel: audit: type=1101 audit(1755046426.510:510): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.510000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.518216 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:46.516000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.523549 kernel: audit: type=1103 audit(1755046426.516:511): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.540034 kernel: audit: type=1006 audit(1755046426.516:512): pid=5266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 00:53:46.540646 kernel: audit: type=1300 audit(1755046426.516:512): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef98e5570 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:46.540727 kernel: audit: type=1327 audit(1755046426.516:512): proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:46.516000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef98e5570 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:46.516000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:46.546489 systemd-logind[1292]: New session 16 of user core. Aug 13 00:53:46.547754 systemd[1]: Started session-16.scope. Aug 13 00:53:46.583212 kernel: audit: type=1105 audit(1755046426.567:513): pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.567000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.585000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:46.597625 kernel: audit: type=1103 audit(1755046426.585:514): pid=5269 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Aug 13 00:53:47.013725 sshd[5266]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:47.021241 systemd[1]: Started sshd@16-137.184.32.218:22-139.178.68.195:51070.service. Aug 13 00:53:47.021000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.027674 kernel: audit: type=1106 audit(1755046427.021:515): pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.031878 kernel: audit: type=1104 audit(1755046427.021:516): pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.021000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.033856 systemd[1]: sshd@15-137.184.32.218:22-139.178.68.195:51064.service: Deactivated successfully. Aug 13 00:53:47.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.32.218:22-139.178.68.195:51070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.32.218:22-139.178.68.195:51064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.043858 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:53:47.044724 systemd-logind[1292]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:53:47.050192 systemd-logind[1292]: Removed session 16. Aug 13 00:53:47.129000 audit[5285]: USER_ACCT pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.132735 sshd[5285]: Accepted publickey for core from 139.178.68.195 port 51070 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:47.133000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.133000 audit[5285]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5d7ea0d0 a2=3 a3=0 items=0 ppid=1 pid=5285 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:47.133000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:47.138337 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:47.151672 systemd-logind[1292]: New session 17 of user core. Aug 13 00:53:47.152807 systemd[1]: Started session-17.scope. 
Aug 13 00:53:47.171000 audit[5285]: USER_START pid=5285 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.173000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.679741 sshd[5285]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:47.683000 audit[5285]: USER_END pid=5285 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.683000 audit[5285]: CRED_DISP pid=5285 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.689891 systemd[1]: Started sshd@17-137.184.32.218:22-139.178.68.195:51074.service. Aug 13 00:53:47.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.32.218:22-139.178.68.195:51074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.694328 systemd[1]: sshd@16-137.184.32.218:22-139.178.68.195:51070.service: Deactivated successfully. 
Aug 13 00:53:47.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.32.218:22-139.178.68.195:51070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.714516 systemd-logind[1292]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:53:47.714731 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:53:47.721562 systemd-logind[1292]: Removed session 17. Aug 13 00:53:47.779000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.781002 sshd[5297]: Accepted publickey for core from 139.178.68.195 port 51074 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:47.785000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.785000 audit[5297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc39619a00 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:47.785000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:47.788258 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:47.799544 systemd-logind[1292]: New session 18 of user core. Aug 13 00:53:47.800693 systemd[1]: Started session-18.scope. 
Aug 13 00:53:47.811000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:47.816000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:48.403412 systemd[1]: run-containerd-runc-k8s.io-1aa9c32ae5b5f8e45ef45b46e6a809232e6a6a7abde8524fefb2daa5035f5887-runc.Clb5nx.mount: Deactivated successfully. Aug 13 00:53:53.533259 sshd[5297]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:53.621973 kernel: kauditd_printk_skb: 20 callbacks suppressed Aug 13 00:53:53.622219 kernel: audit: type=1106 audit(1755046433.594:533): pid=5297 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:53.622698 kernel: audit: type=1130 audit(1755046433.596:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.32.218:22-139.178.68.195:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:53.622770 kernel: audit: type=1104 audit(1755046433.598:535): pid=5297 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:53.622799 kernel: audit: type=1131 audit(1755046433.601:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.32.218:22-139.178.68.195:51074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.594000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:53.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.32.218:22-139.178.68.195:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.598000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:53.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.32.218:22-139.178.68.195:51074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.597466 systemd[1]: Started sshd@18-137.184.32.218:22-139.178.68.195:34002.service. Aug 13 00:53:53.602325 systemd[1]: sshd@17-137.184.32.218:22-139.178.68.195:51074.service: Deactivated successfully. 
Aug 13 00:53:53.603905 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:53:53.622915 systemd-logind[1292]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:53:53.648504 systemd-logind[1292]: Removed session 18. Aug 13 00:53:53.901000 audit[5398]: NETFILTER_CFG table=filter:126 family=2 entries=22 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.901000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffdfc781660 a2=0 a3=7ffdfc78164c items=0 ppid=2236 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.929928 kernel: audit: type=1325 audit(1755046433.901:537): table=filter:126 family=2 entries=22 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.930698 kernel: audit: type=1300 audit(1755046433.901:537): arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffdfc781660 a2=0 a3=7ffdfc78164c items=0 ppid=2236 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.930772 kernel: audit: type=1327 audit(1755046433.901:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:53.930800 kernel: audit: type=1325 audit(1755046433.920:538): table=nat:127 family=2 entries=24 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.931196 kernel: audit: type=1300 audit(1755046433.920:538): arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffdfc781660 a2=0 a3=0 items=0 ppid=2236 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:53.920000 audit[5398]: NETFILTER_CFG table=nat:127 family=2 entries=24 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.920000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffdfc781660 a2=0 a3=0 items=0 ppid=2236 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:53.934566 kernel: audit: type=1327 audit(1755046433.920:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:53.970000 audit[5400]: NETFILTER_CFG table=filter:128 family=2 entries=34 op=nft_register_rule pid=5400 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.970000 audit[5400]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffc02ac2bd0 a2=0 a3=7ffc02ac2bbc items=0 ppid=2236 pid=5400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:53.976000 audit[5400]: NETFILTER_CFG table=nat:129 family=2 entries=24 op=nft_register_rule pid=5400 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:53.976000 audit[5400]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffc02ac2bd0 a2=0 a3=0 items=0 ppid=2236 pid=5400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:54.026000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:54.029033 sshd[5393]: Accepted publickey for core from 139.178.68.195 port 34002 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:53:54.030000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:54.030000 audit[5393]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce2b92ce0 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:54.030000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:53:54.034413 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:54.060554 systemd[1]: Started session-19.scope. Aug 13 00:53:54.062699 systemd-logind[1292]: New session 19 of user core. 
Aug 13 00:53:54.094000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:54.098000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:53:58.677185 kubelet[2099]: E0813 00:53:58.659252 2099 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.102s" Aug 13 00:53:59.231522 kubelet[2099]: E0813 00:53:59.231411 2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:53:59.786000 audit[5409]: NETFILTER_CFG table=filter:130 family=2 entries=33 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:59.803754 kernel: kauditd_printk_skb: 13 callbacks suppressed Aug 13 00:53:59.810795 kernel: audit: type=1325 audit(1755046439.786:546): table=filter:130 family=2 entries=33 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:59.812315 kernel: audit: type=1300 audit(1755046439.786:546): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff1c42bc70 a2=0 a3=7fff1c42bc5c items=0 ppid=2236 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.813573 kernel: audit: type=1327 audit(1755046439.786:546): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:59.786000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff1c42bc70 a2=0 a3=7fff1c42bc5c items=0 ppid=2236 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:59.820000 audit[5409]: NETFILTER_CFG table=nat:131 family=2 entries=31 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:59.820000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff1c42bc70 a2=0 a3=7fff1c42bc5c items=0 ppid=2236 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.850710 kernel: audit: type=1325 audit(1755046439.820:547): table=nat:131 family=2 entries=31 op=nft_register_chain pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:53:59.851008 kernel: audit: type=1300 audit(1755046439.820:547): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff1c42bc70 a2=0 a3=7fff1c42bc5c items=0 ppid=2236 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:59.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:53:59.856532 kernel: audit: type=1327 audit(1755046439.820:547): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:00.637662 sshd[5393]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:00.689000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.708600 kernel: audit: type=1106 audit(1755046440.689:548): pid=5393 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.710095 kernel: audit: type=1104 audit(1755046440.697:549): pid=5393 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.697000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.32.218:22-139.178.68.195:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:00.735895 kernel: audit: type=1130 audit(1755046440.718:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.32.218:22-139.178.68.195:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:00.742819 kernel: audit: type=1131 audit(1755046440.727:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.32.218:22-139.178.68.195:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:00.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.32.218:22-139.178.68.195:34002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:00.719544 systemd[1]: Started sshd@19-137.184.32.218:22-139.178.68.195:58758.service. Aug 13 00:54:00.728108 systemd[1]: sshd@18-137.184.32.218:22-139.178.68.195:34002.service: Deactivated successfully. Aug 13 00:54:00.730467 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:54:00.742820 systemd-logind[1292]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:54:00.757798 systemd-logind[1292]: Removed session 19. 
Aug 13 00:54:00.902000 audit[5415]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:00.902000 audit[5415]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe82345140 a2=0 a3=7ffe8234512c items=0 ppid=2236 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:00.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:00.933000 audit[5415]: NETFILTER_CFG table=nat:133 family=2 entries=110 op=nft_register_chain pid=5415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 00:54:00.933000 audit[5415]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffe82345140 a2=0 a3=7ffe8234512c items=0 ppid=2236 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:00.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 00:54:00.951000 audit[5410]: USER_ACCT pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.953000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:00.953000 
audit[5410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd852acbc0 a2=3 a3=0 items=0 ppid=1 pid=5410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:00.953000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:54:00.956787 sshd[5410]: Accepted publickey for core from 139.178.68.195 port 58758 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:00.956893 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:00.977953 systemd[1]: Started session-20.scope. Aug 13 00:54:00.979730 systemd-logind[1292]: New session 20 of user core. Aug 13 00:54:01.004000 audit[5410]: USER_START pid=5410 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:01.013000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:02.692527 sshd[5410]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:02.706000 audit[5410]: USER_END pid=5410 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:02.708000 audit[5410]: CRED_DISP pid=5410 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:02.724547 systemd[1]: sshd@19-137.184.32.218:22-139.178.68.195:58758.service: Deactivated successfully. Aug 13 00:54:02.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.32.218:22-139.178.68.195:58758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.728030 systemd-logind[1292]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:54:02.728060 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:54:02.731059 systemd-logind[1292]: Removed session 20. Aug 13 00:54:07.701639 systemd[1]: Started sshd@20-137.184.32.218:22-139.178.68.195:58766.service. Aug 13 00:54:07.712680 kernel: kauditd_printk_skb: 16 callbacks suppressed Aug 13 00:54:07.714039 kernel: audit: type=1130 audit(1755046447.700:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.32.218:22-139.178.68.195:58766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.32.218:22-139.178.68.195:58766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:07.798865 sshd[5427]: Accepted publickey for core from 139.178.68.195 port 58766 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:07.806537 kernel: audit: type=1101 audit(1755046447.797:563): pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.806988 kernel: audit: type=1103 audit(1755046447.801:564): pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.797000 audit[5427]: USER_ACCT pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.801000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.808654 sshd[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:07.813667 kernel: audit: type=1006 audit(1755046447.801:565): pid=5427 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Aug 13 00:54:07.801000 audit[5427]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe943d1d00 a2=3 a3=0 items=0 ppid=1 pid=5427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:07.827696 kernel: audit: type=1300 audit(1755046447.801:565): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe943d1d00 a2=3 a3=0 items=0 ppid=1 pid=5427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:07.827809 kernel: audit: type=1327 audit(1755046447.801:565): proctitle=737368643A20636F7265205B707269765D Aug 13 00:54:07.801000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:54:07.833310 systemd-logind[1292]: New session 21 of user core. Aug 13 00:54:07.834578 systemd[1]: Started session-21.scope. Aug 13 00:54:07.849000 audit[5427]: USER_START pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.858826 kernel: audit: type=1105 audit(1755046447.849:566): pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.858924 kernel: audit: type=1103 audit(1755046447.853:567): pid=5430 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:07.853000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Aug 13 00:54:08.374364 sshd[5427]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:08.376000 audit[5427]: USER_END pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:08.383745 kernel: audit: type=1106 audit(1755046448.376:568): pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:08.380000 audit[5427]: CRED_DISP pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:08.389696 kernel: audit: type=1104 audit(1755046448.380:569): pid=5427 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:08.399080 systemd[1]: sshd@20-137.184.32.218:22-139.178.68.195:58766.service: Deactivated successfully. Aug 13 00:54:08.400238 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:54:08.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.32.218:22-139.178.68.195:58766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:08.401726 systemd-logind[1292]: Session 21 logged out. Waiting for processes to exit. 
Aug 13 00:54:08.404477 systemd-logind[1292]: Removed session 21. Aug 13 00:54:13.385862 systemd[1]: Started sshd@21-137.184.32.218:22-139.178.68.195:38456.service. Aug 13 00:54:13.396571 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:54:13.397756 kernel: audit: type=1130 audit(1755046453.386:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.32.218:22-139.178.68.195:38456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.32.218:22-139.178.68.195:38456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.561000 audit[5460]: USER_ACCT pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.567973 kernel: audit: type=1101 audit(1755046453.561:572): pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.568068 sshd[5460]: Accepted publickey for core from 139.178.68.195 port 38456 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:13.573000 audit[5460]: CRED_ACQ pid=5460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.582945 kernel: audit: type=1103 audit(1755046453.573:573): pid=5460 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.583160 kernel: audit: type=1006 audit(1755046453.573:574): pid=5460 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Aug 13 00:54:13.584036 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:13.573000 audit[5460]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb6167a90 a2=3 a3=0 items=0 ppid=1 pid=5460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:13.600827 kernel: audit: type=1300 audit(1755046453.573:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb6167a90 a2=3 a3=0 items=0 ppid=1 pid=5460 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:13.573000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 00:54:13.620554 kernel: audit: type=1327 audit(1755046453.573:574): proctitle=737368643A20636F7265205B707269765D Aug 13 00:54:13.620795 systemd[1]: Started session-22.scope. Aug 13 00:54:13.622190 systemd-logind[1292]: New session 22 of user core. 
Aug 13 00:54:13.646860 kernel: audit: type=1105 audit(1755046453.638:575): pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.638000 audit[5460]: USER_START pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.653539 kernel: audit: type=1103 audit(1755046453.646:576): pid=5463 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.646000 audit[5463]: CRED_ACQ pid=5463 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:13.779342 systemd[1]: run-containerd-runc-k8s.io-d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11-runc.LYHJt3.mount: Deactivated successfully. 
Aug 13 00:54:14.649267 sshd[5460]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:14.651000 audit[5460]: USER_END pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:14.658508 kernel: audit: type=1106 audit(1755046454.651:577): pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:14.657000 audit[5460]: CRED_DISP pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:14.663652 kernel: audit: type=1104 audit(1755046454.657:578): pid=5460 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:14.679268 systemd-logind[1292]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:54:14.679335 systemd[1]: sshd@21-137.184.32.218:22-139.178.68.195:38456.service: Deactivated successfully. Aug 13 00:54:14.680608 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:54:14.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.32.218:22-139.178.68.195:38456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:14.682739 systemd-logind[1292]: Removed session 22. Aug 13 00:54:18.116062 systemd[1]: run-containerd-runc-k8s.io-d8ff7655c1fe74ba438bc8dc8621e741e88d5a975e0a58092c9511c8b18d0d11-runc.amSM2f.mount: Deactivated successfully. Aug 13 00:54:18.409153 systemd[1]: run-containerd-runc-k8s.io-1aa9c32ae5b5f8e45ef45b46e6a809232e6a6a7abde8524fefb2daa5035f5887-runc.kBZGWo.mount: Deactivated successfully. Aug 13 00:54:19.103340 systemd[1]: run-containerd-runc-k8s.io-22e24005876719f178c014506eac38206078b50817235964f516db1f32905e74-runc.Um1ZjU.mount: Deactivated successfully. Aug 13 00:54:19.659899 systemd[1]: Started sshd@22-137.184.32.218:22-139.178.68.195:38470.service. Aug 13 00:54:19.677986 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:54:19.679553 kernel: audit: type=1130 audit(1755046459.660:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.32.218:22-139.178.68.195:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:19.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.32.218:22-139.178.68.195:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:19.871000 audit[5551]: USER_ACCT pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:19.880071 kernel: audit: type=1101 audit(1755046459.871:581): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:19.880191 sshd[5551]: Accepted publickey for core from 139.178.68.195 port 38470 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:19.889320 kernel: audit: type=1103 audit(1755046459.879:582): pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:19.889482 kernel: audit: type=1006 audit(1755046459.879:583): pid=5551 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Aug 13 00:54:19.879000 audit[5551]: CRED_ACQ pid=5551 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 00:54:19.890782 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:19.898912 kernel: audit: type=1300 audit(1755046459.879:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd810f110 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:19.879000 audit[5551]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd810f110 a2=3 a3=0 items=0 ppid=1 pid=5551 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:19.879000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:54:19.909516 kernel: audit: type=1327 audit(1755046459.879:583): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:54:19.925569 systemd-logind[1292]: New session 23 of user core.
Aug 13 00:54:19.926416 systemd[1]: Started session-23.scope.
Aug 13 00:54:19.962000 audit[5551]: USER_START pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:19.968553 kernel: audit: type=1105 audit(1755046459.962:584): pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:19.973511 kernel: audit: type=1103 audit(1755046459.967:585): pid=5554 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:19.967000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:20.942778 sshd[5551]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:20.946000 audit[5551]: USER_END pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:20.955711 kernel: audit: type=1106 audit(1755046460.946:586): pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:20.955932 kernel: audit: type=1104 audit(1755046460.946:587): pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:20.946000 audit[5551]: CRED_DISP pid=5551 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:20.956958 systemd-logind[1292]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:54:20.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.32.218:22-139.178.68.195:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:20.958295 systemd[1]: sshd@22-137.184.32.218:22-139.178.68.195:38470.service: Deactivated successfully.
Aug 13 00:54:20.959728 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:54:20.961537 systemd-logind[1292]: Removed session 23.
Aug 13 00:54:25.426184 kubelet[2099]: E0813 00:54:25.426086    2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:54:25.958242 systemd[1]: Started sshd@23-137.184.32.218:22-139.178.68.195:37940.service.
Aug 13 00:54:25.963755 kernel: kauditd_printk_skb: 1 callbacks suppressed
Aug 13 00:54:25.970989 kernel: audit: type=1130 audit(1755046465.959:589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.32.218:22-139.178.68.195:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:25.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.32.218:22-139.178.68.195:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.112000 audit[5566]: USER_ACCT pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.116852 sshd[5566]: Accepted publickey for core from 139.178.68.195 port 37940 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:54:26.118239 kernel: audit: type=1101 audit(1755046466.112:590): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.118324 kernel: audit: type=1103 audit(1755046466.117:591): pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.117000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.119879 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:26.127587 kernel: audit: type=1006 audit(1755046466.117:592): pid=5566 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Aug 13 00:54:26.117000 audit[5566]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd5f87800 a2=3 a3=0 items=0 ppid=1 pid=5566 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:26.134508 kernel: audit: type=1300 audit(1755046466.117:592): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd5f87800 a2=3 a3=0 items=0 ppid=1 pid=5566 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:54:26.140141 systemd[1]: Started session-24.scope.
Aug 13 00:54:26.140550 systemd-logind[1292]: New session 24 of user core.
Aug 13 00:54:26.117000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Aug 13 00:54:26.148505 kernel: audit: type=1327 audit(1755046466.117:592): proctitle=737368643A20636F7265205B707269765D
Aug 13 00:54:26.146000 audit[5566]: USER_START pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.162395 kernel: audit: type=1105 audit(1755046466.146:593): pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.162543 kernel: audit: type=1103 audit(1755046466.149:594): pid=5569 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.149000 audit[5569]: CRED_ACQ pid=5569 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.948199 sshd[5566]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:26.950000 audit[5566]: USER_END pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.950000 audit[5566]: CRED_DISP pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.958163 kernel: audit: type=1106 audit(1755046466.950:595): pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.958355 kernel: audit: type=1104 audit(1755046466.950:596): pid=5566 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Aug 13 00:54:26.959133 systemd[1]: sshd@23-137.184.32.218:22-139.178.68.195:37940.service: Deactivated successfully.
Aug 13 00:54:26.961549 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:54:26.962205 systemd-logind[1292]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:54:26.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.32.218:22-139.178.68.195:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:54:26.963319 systemd-logind[1292]: Removed session 24.
Aug 13 00:54:29.202144 kubelet[2099]: E0813 00:54:29.202097    2099 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"