May 13 08:23:05.900343 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 08:23:05.900365 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 08:23:05.900375 kernel: BIOS-provided physical RAM map:
May 13 08:23:05.900384 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 08:23:05.900391 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 08:23:05.900398 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 08:23:05.900406 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 08:23:05.900413 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 08:23:05.900419 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 08:23:05.900426 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 08:23:05.900433 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 08:23:05.900439 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 08:23:05.900448 kernel: NX (Execute Disable) protection: active
May 13 08:23:05.900454 kernel: SMBIOS 3.0.0 present.
May 13 08:23:05.900463 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 08:23:05.900470 kernel: Hypervisor detected: KVM
May 13 08:23:05.900477 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 08:23:05.900484 kernel: kvm-clock: cpu 0, msr 41196001, primary cpu clock
May 13 08:23:05.900493 kernel: kvm-clock: using sched offset of 3869470228 cycles
May 13 08:23:05.900501 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 08:23:05.900509 kernel: tsc: Detected 1996.249 MHz processor
May 13 08:23:05.900516 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 08:23:05.900524 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 08:23:05.900532 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 08:23:05.900539 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 08:23:05.900547 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 08:23:05.900554 kernel: ACPI: Early table checksum verification disabled
May 13 08:23:05.900563 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 08:23:05.900570 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:05.905646 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:05.905663 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:05.905672 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 08:23:05.905680 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:05.905687 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:05.905695 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 08:23:05.905706 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 08:23:05.905714 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 08:23:05.905721 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 08:23:05.905728 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 08:23:05.905736 kernel: No NUMA configuration found
May 13 08:23:05.905747 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 08:23:05.905754 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 13 08:23:05.905763 kernel: Zone ranges:
May 13 08:23:05.905771 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 08:23:05.905779 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 08:23:05.905787 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 08:23:05.905795 kernel: Movable zone start for each node
May 13 08:23:05.905802 kernel: Early memory node ranges
May 13 08:23:05.905810 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 08:23:05.905818 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 08:23:05.905827 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 08:23:05.905835 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 08:23:05.905842 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 08:23:05.905850 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 08:23:05.905858 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 08:23:05.905866 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 08:23:05.905873 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 08:23:05.905881 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 08:23:05.905889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 08:23:05.905899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 08:23:05.905906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 08:23:05.905914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 08:23:05.905922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 08:23:05.905929 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 08:23:05.905937 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 08:23:05.905945 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 08:23:05.905952 kernel: Booting paravirtualized kernel on KVM
May 13 08:23:05.905960 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 08:23:05.905970 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 13 08:23:05.905978 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 13 08:23:05.905985 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 13 08:23:05.905993 kernel: pcpu-alloc: [0] 0 1
May 13 08:23:05.906001 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0
May 13 08:23:05.906009 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 08:23:05.906016 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 13 08:23:05.906024 kernel: Policy zone: Normal
May 13 08:23:05.906033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 08:23:05.906043 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 08:23:05.906051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 08:23:05.906058 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 08:23:05.906066 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 08:23:05.906074 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225236K reserved, 0K cma-reserved)
May 13 08:23:05.906082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 08:23:05.906089 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 08:23:05.906097 kernel: ftrace: allocated 136 pages with 2 groups
May 13 08:23:05.906106 kernel: rcu: Hierarchical RCU implementation.
May 13 08:23:05.906115 kernel: rcu: RCU event tracing is enabled.
May 13 08:23:05.906123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 08:23:05.906131 kernel: Rude variant of Tasks RCU enabled.
May 13 08:23:05.906138 kernel: Tracing variant of Tasks RCU enabled.
May 13 08:23:05.906146 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 08:23:05.906154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 08:23:05.906162 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 08:23:05.906170 kernel: Console: colour VGA+ 80x25
May 13 08:23:05.906179 kernel: printk: console [tty0] enabled
May 13 08:23:05.906187 kernel: printk: console [ttyS0] enabled
May 13 08:23:05.906194 kernel: ACPI: Core revision 20210730
May 13 08:23:05.906202 kernel: APIC: Switch to symmetric I/O mode setup
May 13 08:23:05.906210 kernel: x2apic enabled
May 13 08:23:05.906218 kernel: Switched APIC routing to physical x2apic.
May 13 08:23:05.906225 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 08:23:05.906233 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 08:23:05.906241 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 08:23:05.906251 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 08:23:05.906258 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 08:23:05.906266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 08:23:05.906274 kernel: Spectre V2 : Mitigation: Retpolines
May 13 08:23:05.906281 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 08:23:05.906289 kernel: Speculative Store Bypass: Vulnerable
May 13 08:23:05.906297 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 08:23:05.906304 kernel: Freeing SMP alternatives memory: 32K
May 13 08:23:05.906312 kernel: pid_max: default: 32768 minimum: 301
May 13 08:23:05.906321 kernel: LSM: Security Framework initializing
May 13 08:23:05.906328 kernel: SELinux: Initializing.
May 13 08:23:05.906336 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 08:23:05.906344 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 08:23:05.906352 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 08:23:05.906360 kernel: Performance Events: AMD PMU driver.
May 13 08:23:05.906373 kernel: ... version: 0
May 13 08:23:05.906382 kernel: ... bit width: 48
May 13 08:23:05.906390 kernel: ... generic registers: 4
May 13 08:23:05.906399 kernel: ... value mask: 0000ffffffffffff
May 13 08:23:05.906407 kernel: ... max period: 00007fffffffffff
May 13 08:23:05.906414 kernel: ... fixed-purpose events: 0
May 13 08:23:05.906424 kernel: ... event mask: 000000000000000f
May 13 08:23:05.906432 kernel: signal: max sigframe size: 1440
May 13 08:23:05.906440 kernel: rcu: Hierarchical SRCU implementation.
May 13 08:23:05.906448 kernel: smp: Bringing up secondary CPUs ...
May 13 08:23:05.906456 kernel: x86: Booting SMP configuration:
May 13 08:23:05.906466 kernel: .... node #0, CPUs: #1
May 13 08:23:05.906474 kernel: kvm-clock: cpu 1, msr 41196041, secondary cpu clock
May 13 08:23:05.906482 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0
May 13 08:23:05.906490 kernel: smp: Brought up 1 node, 2 CPUs
May 13 08:23:05.906498 kernel: smpboot: Max logical packages: 2
May 13 08:23:05.906506 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 08:23:05.906514 kernel: devtmpfs: initialized
May 13 08:23:05.906522 kernel: x86/mm: Memory block size: 128MB
May 13 08:23:05.906530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 08:23:05.906540 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 08:23:05.906548 kernel: pinctrl core: initialized pinctrl subsystem
May 13 08:23:05.906556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 08:23:05.906564 kernel: audit: initializing netlink subsys (disabled)
May 13 08:23:05.906584 kernel: audit: type=2000 audit(1747124584.970:1): state=initialized audit_enabled=0 res=1
May 13 08:23:05.906593 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 08:23:05.906601 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 08:23:05.906609 kernel: cpuidle: using governor menu
May 13 08:23:05.906617 kernel: ACPI: bus type PCI registered
May 13 08:23:05.906627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 08:23:05.906635 kernel: dca service started, version 1.12.1
May 13 08:23:05.906643 kernel: PCI: Using configuration type 1 for base access
May 13 08:23:05.906651 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 08:23:05.906659 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 08:23:05.906667 kernel: ACPI: Added _OSI(Module Device)
May 13 08:23:05.906675 kernel: ACPI: Added _OSI(Processor Device)
May 13 08:23:05.906683 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 08:23:05.906691 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 08:23:05.906701 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 08:23:05.906709 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 08:23:05.906717 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 08:23:05.906725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 08:23:05.906733 kernel: ACPI: Interpreter enabled
May 13 08:23:05.906741 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 08:23:05.906749 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 08:23:05.906757 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 08:23:05.906765 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 08:23:05.906775 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 08:23:05.906910 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 08:23:05.907012 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 13 08:23:05.907026 kernel: acpiphp: Slot [3] registered
May 13 08:23:05.907034 kernel: acpiphp: Slot [4] registered
May 13 08:23:05.907042 kernel: acpiphp: Slot [5] registered
May 13 08:23:05.907050 kernel: acpiphp: Slot [6] registered
May 13 08:23:05.907058 kernel: acpiphp: Slot [7] registered
May 13 08:23:05.907068 kernel: acpiphp: Slot [8] registered
May 13 08:23:05.907076 kernel: acpiphp: Slot [9] registered
May 13 08:23:05.907084 kernel: acpiphp: Slot [10] registered
May 13 08:23:05.907092 kernel: acpiphp: Slot [11] registered
May 13 08:23:05.907100 kernel: acpiphp: Slot [12] registered
May 13 08:23:05.907108 kernel: acpiphp: Slot [13] registered
May 13 08:23:05.907116 kernel: acpiphp: Slot [14] registered
May 13 08:23:05.907124 kernel: acpiphp: Slot [15] registered
May 13 08:23:05.907132 kernel: acpiphp: Slot [16] registered
May 13 08:23:05.907142 kernel: acpiphp: Slot [17] registered
May 13 08:23:05.907150 kernel: acpiphp: Slot [18] registered
May 13 08:23:05.907157 kernel: acpiphp: Slot [19] registered
May 13 08:23:05.907165 kernel: acpiphp: Slot [20] registered
May 13 08:23:05.907173 kernel: acpiphp: Slot [21] registered
May 13 08:23:05.907181 kernel: acpiphp: Slot [22] registered
May 13 08:23:05.907189 kernel: acpiphp: Slot [23] registered
May 13 08:23:05.907197 kernel: acpiphp: Slot [24] registered
May 13 08:23:05.907204 kernel: acpiphp: Slot [25] registered
May 13 08:23:05.907212 kernel: acpiphp: Slot [26] registered
May 13 08:23:05.907222 kernel: acpiphp: Slot [27] registered
May 13 08:23:05.907230 kernel: acpiphp: Slot [28] registered
May 13 08:23:05.907238 kernel: acpiphp: Slot [29] registered
May 13 08:23:05.907246 kernel: acpiphp: Slot [30] registered
May 13 08:23:05.907254 kernel: acpiphp: Slot [31] registered
May 13 08:23:05.907262 kernel: PCI host bridge to bus 0000:00
May 13 08:23:05.907347 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 08:23:05.907423 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 08:23:05.907500 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 08:23:05.907591 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 08:23:05.907675 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 08:23:05.907748 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 08:23:05.907847 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 08:23:05.907939 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 08:23:05.908037 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 08:23:05.908122 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 13 08:23:05.908206 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 08:23:05.908793 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 08:23:05.908886 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 08:23:05.908974 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 08:23:05.909078 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 08:23:05.909175 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 08:23:05.909264 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 08:23:05.909470 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 08:23:05.909567 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 08:23:05.909689 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 08:23:05.909779 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 13 08:23:05.909868 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 13 08:23:05.909965 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 08:23:05.910060 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 08:23:05.910144 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 13 08:23:05.910227 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 13 08:23:05.910308 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 08:23:05.910391 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 13 08:23:05.910479 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 08:23:05.910567 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 13 08:23:05.910682 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 13 08:23:05.910764 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 08:23:05.910852 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 13 08:23:05.910947 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 13 08:23:05.911031 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 08:23:05.911129 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 13 08:23:05.911220 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 13 08:23:05.911309 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 13 08:23:05.911402 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 08:23:05.911414 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 08:23:05.911423 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 08:23:05.911432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 08:23:05.911441 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 08:23:05.911452 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 08:23:05.911461 kernel: iommu: Default domain type: Translated
May 13 08:23:05.911470 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 08:23:05.911558 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 08:23:05.911666 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 08:23:05.911757 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 08:23:05.911770 kernel: vgaarb: loaded
May 13 08:23:05.911779 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 08:23:05.911788 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 08:23:05.911801 kernel: PTP clock support registered
May 13 08:23:05.911809 kernel: PCI: Using ACPI for IRQ routing
May 13 08:23:05.911818 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 08:23:05.911827 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 08:23:05.911835 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 08:23:05.911844 kernel: clocksource: Switched to clocksource kvm-clock
May 13 08:23:05.911852 kernel: VFS: Disk quotas dquot_6.6.0
May 13 08:23:05.911861 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 08:23:05.911870 kernel: pnp: PnP ACPI init
May 13 08:23:05.911962 kernel: pnp 00:03: [dma 2]
May 13 08:23:05.911977 kernel: pnp: PnP ACPI: found 5 devices
May 13 08:23:05.911987 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 08:23:05.911995 kernel: NET: Registered PF_INET protocol family
May 13 08:23:05.912003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 08:23:05.912012 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 08:23:05.912020 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 08:23:05.912028 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 08:23:05.912039 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 08:23:05.912047 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 08:23:05.912055 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 08:23:05.912064 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 08:23:05.912072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 08:23:05.912080 kernel: NET: Registered PF_XDP protocol family
May 13 08:23:05.912153 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 08:23:05.912226 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 08:23:05.912300 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 08:23:05.912376 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 08:23:05.912448 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 08:23:05.912532 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 08:23:05.912635 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 08:23:05.912719 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 13 08:23:05.912731 kernel: PCI: CLS 0 bytes, default 64
May 13 08:23:05.912739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 08:23:05.912748 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 08:23:05.912759 kernel: Initialise system trusted keyrings
May 13 08:23:05.912767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 08:23:05.912776 kernel: Key type asymmetric registered
May 13 08:23:05.912784 kernel: Asymmetric key parser 'x509' registered
May 13 08:23:05.912792 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 08:23:05.912800 kernel: io scheduler mq-deadline registered
May 13 08:23:05.912808 kernel: io scheduler kyber registered
May 13 08:23:05.912816 kernel: io scheduler bfq registered
May 13 08:23:05.912824 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 08:23:05.912834 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 08:23:05.912842 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 08:23:05.912850 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 08:23:05.912859 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 08:23:05.912867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 08:23:05.912875 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 08:23:05.912883 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 08:23:05.912891 kernel: random: crng init done
May 13 08:23:05.912899 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 08:23:05.912908 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 08:23:05.912917 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 08:23:05.912999 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 08:23:05.913076 kernel: rtc_cmos 00:04: registered as rtc0
May 13 08:23:05.913151 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T08:23:05 UTC (1747124585)
May 13 08:23:05.913224 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 08:23:05.913236 kernel: NET: Registered PF_INET6 protocol family
May 13 08:23:05.913245 kernel: Segment Routing with IPv6
May 13 08:23:05.913255 kernel: In-situ OAM (IOAM) with IPv6
May 13 08:23:05.913263 kernel: NET: Registered PF_PACKET protocol family
May 13 08:23:05.913271 kernel: Key type dns_resolver registered
May 13 08:23:05.913280 kernel: IPI shorthand broadcast: enabled
May 13 08:23:05.913288 kernel: sched_clock: Marking stable (852679959, 163473362)->(1085852816, -69699495)
May 13 08:23:05.913296 kernel: registered taskstats version 1
May 13 08:23:05.913304 kernel: Loading compiled-in X.509 certificates
May 13 08:23:05.913312 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095'
May 13 08:23:05.913320 kernel: Key type .fscrypt registered
May 13 08:23:05.913330 kernel: Key type fscrypt-provisioning registered
May 13 08:23:05.913338 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 08:23:05.913346 kernel: ima: Allocated hash algorithm: sha1
May 13 08:23:05.913354 kernel: ima: No architecture policies found
May 13 08:23:05.913362 kernel: clk: Disabling unused clocks
May 13 08:23:05.913370 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 08:23:05.913378 kernel: Write protecting the kernel read-only data: 28672k
May 13 08:23:05.913386 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 08:23:05.913396 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 08:23:05.913404 kernel: Run /init as init process
May 13 08:23:05.913412 kernel: with arguments:
May 13 08:23:05.913420 kernel: /init
May 13 08:23:05.913428 kernel: with environment:
May 13 08:23:05.913436 kernel: HOME=/
May 13 08:23:05.913443 kernel: TERM=linux
May 13 08:23:05.913451 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 08:23:05.913462 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 08:23:05.913475 systemd[1]: Detected virtualization kvm.
May 13 08:23:05.913483 systemd[1]: Detected architecture x86-64.
May 13 08:23:05.913492 systemd[1]: Running in initrd.
May 13 08:23:05.913501 systemd[1]: No hostname configured, using default hostname.
May 13 08:23:05.913509 systemd[1]: Hostname set to .
May 13 08:23:05.913518 systemd[1]: Initializing machine ID from VM UUID.
May 13 08:23:05.913527 systemd[1]: Queued start job for default target initrd.target.
May 13 08:23:05.913537 systemd[1]: Started systemd-ask-password-console.path.
May 13 08:23:05.913546 systemd[1]: Reached target cryptsetup.target.
May 13 08:23:05.913554 systemd[1]: Reached target paths.target.
May 13 08:23:05.913563 systemd[1]: Reached target slices.target.
May 13 08:23:05.913571 systemd[1]: Reached target swap.target.
May 13 08:23:05.913595 systemd[1]: Reached target timers.target.
May 13 08:23:05.913604 systemd[1]: Listening on iscsid.socket.
May 13 08:23:05.913612 systemd[1]: Listening on iscsiuio.socket.
May 13 08:23:05.913624 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 08:23:05.913639 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 08:23:05.913649 systemd[1]: Listening on systemd-journald.socket.
May 13 08:23:05.913658 systemd[1]: Listening on systemd-networkd.socket.
May 13 08:23:05.913667 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 08:23:05.913676 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 08:23:05.913687 systemd[1]: Reached target sockets.target.
May 13 08:23:05.913696 systemd[1]: Starting kmod-static-nodes.service...
May 13 08:23:05.913705 systemd[1]: Finished network-cleanup.service.
May 13 08:23:05.913714 systemd[1]: Starting systemd-fsck-usr.service...
May 13 08:23:05.913723 systemd[1]: Starting systemd-journald.service...
May 13 08:23:05.913732 systemd[1]: Starting systemd-modules-load.service...
May 13 08:23:05.913741 systemd[1]: Starting systemd-resolved.service...
May 13 08:23:05.913750 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 08:23:05.913759 systemd[1]: Finished kmod-static-nodes.service.
May 13 08:23:05.913769 systemd[1]: Finished systemd-fsck-usr.service.
May 13 08:23:05.913778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 08:23:05.913787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 08:23:05.913800 systemd-journald[185]: Journal started
May 13 08:23:05.913848 systemd-journald[185]: Runtime Journal (/run/log/journal/0df479e74e2a4a1093437d94bdc183f9) is 8.0M, max 78.4M, 70.4M free.
May 13 08:23:05.900644 systemd-modules-load[186]: Inserted module 'overlay'
May 13 08:23:05.944034 systemd[1]: Started systemd-resolved.service.
May 13 08:23:05.944061 kernel: audit: type=1130 audit(1747124585.935:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.907197 systemd-resolved[187]: Positive Trust Anchors:
May 13 08:23:05.954120 systemd[1]: Started systemd-journald.service.
May 13 08:23:05.954140 kernel: audit: type=1130 audit(1747124585.943:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.954153 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 08:23:05.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.907207 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 08:23:05.959926 kernel: audit: type=1130 audit(1747124585.954:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.907243 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 08:23:05.968357 kernel: audit: type=1130 audit(1747124585.959:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.968375 kernel: Bridge firewalling registered
May 13 08:23:05.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.911911 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 13 08:23:05.954911 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 08:23:05.960544 systemd[1]: Reached target nss-lookup.target.
May 13 08:23:05.961250 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 13 08:23:05.969554 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 08:23:05.989767 kernel: audit: type=1130 audit(1747124585.983:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:05.983814 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 08:23:05.985052 systemd[1]: Starting dracut-cmdline.service... May 13 08:23:05.994862 kernel: SCSI subsystem initialized May 13 08:23:06.003143 dracut-cmdline[203]: dracut-dracut-053 May 13 08:23:06.004852 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 08:23:06.015892 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 08:23:06.015921 kernel: device-mapper: uevent: version 1.0.3 May 13 08:23:06.018094 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 08:23:06.022690 systemd-modules-load[186]: Inserted module 'dm_multipath' May 13 08:23:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.024698 systemd[1]: Finished systemd-modules-load.service. May 13 08:23:06.030960 kernel: audit: type=1130 audit(1747124586.024:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.031176 systemd[1]: Starting systemd-sysctl.service... May 13 08:23:06.036920 systemd[1]: Finished systemd-sysctl.service. May 13 08:23:06.042779 kernel: audit: type=1130 audit(1747124586.037:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:06.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.075721 kernel: Loading iSCSI transport class v2.0-870. May 13 08:23:06.096642 kernel: iscsi: registered transport (tcp) May 13 08:23:06.122918 kernel: iscsi: registered transport (qla4xxx) May 13 08:23:06.122996 kernel: QLogic iSCSI HBA Driver May 13 08:23:06.149823 systemd[1]: Finished dracut-cmdline.service. May 13 08:23:06.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.151106 systemd[1]: Starting dracut-pre-udev.service... May 13 08:23:06.156597 kernel: audit: type=1130 audit(1747124586.149:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.205640 kernel: raid6: sse2x4 gen() 12971 MB/s May 13 08:23:06.223612 kernel: raid6: sse2x4 xor() 7059 MB/s May 13 08:23:06.241660 kernel: raid6: sse2x2 gen() 14222 MB/s May 13 08:23:06.259647 kernel: raid6: sse2x2 xor() 8690 MB/s May 13 08:23:06.277659 kernel: raid6: sse2x1 gen() 11182 MB/s May 13 08:23:06.296133 kernel: raid6: sse2x1 xor() 6977 MB/s May 13 08:23:06.296188 kernel: raid6: using algorithm sse2x2 gen() 14222 MB/s May 13 08:23:06.296214 kernel: raid6: .... 
xor() 8690 MB/s, rmw enabled May 13 08:23:06.297245 kernel: raid6: using ssse3x2 recovery algorithm May 13 08:23:06.317168 kernel: xor: measuring software checksum speed May 13 08:23:06.317228 kernel: prefetch64-sse : 17147 MB/sec May 13 08:23:06.318543 kernel: generic_sse : 15628 MB/sec May 13 08:23:06.318629 kernel: xor: using function: prefetch64-sse (17147 MB/sec) May 13 08:23:06.438632 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 08:23:06.450680 systemd[1]: Finished dracut-pre-udev.service. May 13 08:23:06.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.451000 audit: BPF prog-id=7 op=LOAD May 13 08:23:06.457000 audit: BPF prog-id=8 op=LOAD May 13 08:23:06.457596 kernel: audit: type=1130 audit(1747124586.451:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.457929 systemd[1]: Starting systemd-udevd.service... May 13 08:23:06.471122 systemd-udevd[385]: Using default interface naming scheme 'v252'. May 13 08:23:06.475793 systemd[1]: Started systemd-udevd.service. May 13 08:23:06.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.486216 systemd[1]: Starting dracut-pre-trigger.service... May 13 08:23:06.507796 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 13 08:23:06.550864 systemd[1]: Finished dracut-pre-trigger.service. May 13 08:23:06.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:06.553405 systemd[1]: Starting systemd-udev-trigger.service... May 13 08:23:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:06.596999 systemd[1]: Finished systemd-udev-trigger.service. May 13 08:23:06.656613 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 13 08:23:06.689048 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 08:23:06.689068 kernel: GPT:17805311 != 20971519 May 13 08:23:06.689080 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 08:23:06.689099 kernel: GPT:17805311 != 20971519 May 13 08:23:06.689110 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 08:23:06.689121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:23:06.693615 kernel: libata version 3.00 loaded. May 13 08:23:06.699835 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 08:23:06.722135 kernel: scsi host0: ata_piix May 13 08:23:06.722273 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445) May 13 08:23:06.722295 kernel: scsi host1: ata_piix May 13 08:23:06.722425 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 13 08:23:06.722441 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 13 08:23:06.724864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 08:23:06.774609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 08:23:06.782568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 08:23:06.785769 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 08:23:06.786338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
May 13 08:23:06.789685 systemd[1]: Starting disk-uuid.service... May 13 08:23:06.801177 disk-uuid[471]: Primary Header is updated. May 13 08:23:06.801177 disk-uuid[471]: Secondary Entries is updated. May 13 08:23:06.801177 disk-uuid[471]: Secondary Header is updated. May 13 08:23:06.806633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:23:06.811606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:23:07.827727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:23:07.829389 disk-uuid[472]: The operation has completed successfully. May 13 08:23:07.903968 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 08:23:07.905953 systemd[1]: Finished disk-uuid.service. May 13 08:23:07.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:07.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:07.928042 systemd[1]: Starting verity-setup.service... May 13 08:23:07.966688 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 13 08:23:08.072746 systemd[1]: Found device dev-mapper-usr.device. May 13 08:23:08.076812 systemd[1]: Mounting sysusr-usr.mount... May 13 08:23:08.083511 systemd[1]: Finished verity-setup.service. May 13 08:23:08.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.214631 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 08:23:08.215208 systemd[1]: Mounted sysusr-usr.mount. 
May 13 08:23:08.215962 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 08:23:08.216762 systemd[1]: Starting ignition-setup.service... May 13 08:23:08.217895 systemd[1]: Starting parse-ip-for-networkd.service... May 13 08:23:08.246993 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:23:08.247045 kernel: BTRFS info (device vda6): using free space tree May 13 08:23:08.247059 kernel: BTRFS info (device vda6): has skinny extents May 13 08:23:08.267833 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 08:23:08.288739 systemd[1]: Finished ignition-setup.service. May 13 08:23:08.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.290740 systemd[1]: Starting ignition-fetch-offline.service... May 13 08:23:08.354463 systemd[1]: Finished parse-ip-for-networkd.service. May 13 08:23:08.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.355000 audit: BPF prog-id=9 op=LOAD May 13 08:23:08.356437 systemd[1]: Starting systemd-networkd.service... May 13 08:23:08.381038 systemd-networkd[642]: lo: Link UP May 13 08:23:08.381044 systemd-networkd[642]: lo: Gained carrier May 13 08:23:08.381551 systemd-networkd[642]: Enumeration completed May 13 08:23:08.381848 systemd-networkd[642]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 08:23:08.383288 systemd-networkd[642]: eth0: Link UP May 13 08:23:08.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:08.383292 systemd-networkd[642]: eth0: Gained carrier May 13 08:23:08.383706 systemd[1]: Started systemd-networkd.service. May 13 08:23:08.387574 systemd[1]: Reached target network.target. May 13 08:23:08.391607 systemd[1]: Starting iscsiuio.service... May 13 08:23:08.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.402093 systemd[1]: Started iscsiuio.service. May 13 08:23:08.403627 systemd[1]: Starting iscsid.service... May 13 08:23:08.405922 systemd-networkd[642]: eth0: DHCPv4 address 172.24.4.25/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 08:23:08.407514 iscsid[647]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 08:23:08.407514 iscsid[647]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 08:23:08.407514 iscsid[647]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 08:23:08.407514 iscsid[647]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 08:23:08.407514 iscsid[647]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 08:23:08.407514 iscsid[647]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 08:23:08.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.409700 systemd[1]: Started iscsid.service.
May 13 08:23:08.412300 systemd[1]: Starting dracut-initqueue.service... May 13 08:23:08.426543 systemd[1]: Finished dracut-initqueue.service. May 13 08:23:08.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.427800 systemd[1]: Reached target remote-fs-pre.target. May 13 08:23:08.428777 systemd[1]: Reached target remote-cryptsetup.target. May 13 08:23:08.429898 systemd[1]: Reached target remote-fs.target. May 13 08:23:08.431886 systemd[1]: Starting dracut-pre-mount.service... May 13 08:23:08.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.443241 systemd[1]: Finished dracut-pre-mount.service. May 13 08:23:08.580152 ignition[604]: Ignition 2.14.0 May 13 08:23:08.582019 ignition[604]: Stage: fetch-offline May 13 08:23:08.582786 ignition[604]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:08.582818 ignition[604]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:08.584068 ignition[604]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:08.586017 systemd[1]: Finished ignition-fetch-offline.service. May 13 08:23:08.584232 ignition[604]: parsed url from cmdline: "" May 13 08:23:08.584239 ignition[604]: no config URL provided May 13 08:23:08.584247 ignition[604]: reading system config file "/usr/lib/ignition/user.ign" May 13 08:23:08.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:08.584267 ignition[604]: no config at "/usr/lib/ignition/user.ign" May 13 08:23:08.589414 systemd[1]: Starting ignition-fetch.service... May 13 08:23:08.584276 ignition[604]: failed to fetch config: resource requires networking May 13 08:23:08.584428 ignition[604]: Ignition finished successfully May 13 08:23:08.598325 ignition[666]: Ignition 2.14.0 May 13 08:23:08.598332 ignition[666]: Stage: fetch May 13 08:23:08.598475 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:08.598495 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:08.600346 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:08.600488 ignition[666]: parsed url from cmdline: "" May 13 08:23:08.600492 ignition[666]: no config URL provided May 13 08:23:08.600499 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 13 08:23:08.600508 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 13 08:23:08.603173 ignition[666]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 13 08:23:08.603658 ignition[666]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 13 08:23:08.603700 ignition[666]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
May 13 08:23:08.891878 ignition[666]: GET result: OK May 13 08:23:08.892197 ignition[666]: parsing config with SHA512: d5a1e2b61679f02b5a71ccf62ad2020b912119afe1f8c4dc5cf72650ca68588f2571114b45de32e1b43fafd923006c38ed37b336f837572a45d5b11c506d3fdd May 13 08:23:08.914325 unknown[666]: fetched base config from "system" May 13 08:23:08.914360 unknown[666]: fetched base config from "system" May 13 08:23:08.915667 ignition[666]: fetch: fetch complete May 13 08:23:08.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.914375 unknown[666]: fetched user config from "openstack" May 13 08:23:08.915681 ignition[666]: fetch: fetch passed May 13 08:23:08.918507 systemd[1]: Finished ignition-fetch.service. May 13 08:23:08.915764 ignition[666]: Ignition finished successfully May 13 08:23:08.922303 systemd[1]: Starting ignition-kargs.service... May 13 08:23:08.954873 ignition[672]: Ignition 2.14.0 May 13 08:23:08.954901 ignition[672]: Stage: kargs May 13 08:23:08.955190 ignition[672]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:08.955238 ignition[672]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:08.957456 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:08.960444 ignition[672]: kargs: kargs passed May 13 08:23:08.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.961542 systemd[1]: Finished ignition-kargs.service. May 13 08:23:08.960555 ignition[672]: Ignition finished successfully May 13 08:23:08.963200 systemd[1]: Starting ignition-disks.service... 
May 13 08:23:08.981342 ignition[677]: Ignition 2.14.0 May 13 08:23:08.981377 ignition[677]: Stage: disks May 13 08:23:08.981704 ignition[677]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:08.981760 ignition[677]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:08.983978 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:08.986746 ignition[677]: disks: disks passed May 13 08:23:08.988354 systemd[1]: Finished ignition-disks.service. May 13 08:23:08.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:08.986840 ignition[677]: Ignition finished successfully May 13 08:23:08.990632 systemd[1]: Reached target initrd-root-device.target. May 13 08:23:08.992902 systemd[1]: Reached target local-fs-pre.target. May 13 08:23:08.995208 systemd[1]: Reached target local-fs.target. May 13 08:23:08.997473 systemd[1]: Reached target sysinit.target. May 13 08:23:08.999805 systemd[1]: Reached target basic.target. May 13 08:23:09.003463 systemd[1]: Starting systemd-fsck-root.service... May 13 08:23:09.033517 systemd-fsck[684]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 13 08:23:09.054629 systemd[1]: Finished systemd-fsck-root.service. May 13 08:23:09.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.057424 systemd[1]: Mounting sysroot.mount... May 13 08:23:09.080633 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 08:23:09.081795 systemd[1]: Mounted sysroot.mount. 
May 13 08:23:09.083038 systemd[1]: Reached target initrd-root-fs.target. May 13 08:23:09.087351 systemd[1]: Mounting sysroot-usr.mount... May 13 08:23:09.089237 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 08:23:09.090653 systemd[1]: Starting flatcar-openstack-hostname.service... May 13 08:23:09.096896 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 08:23:09.096971 systemd[1]: Reached target ignition-diskful.target. May 13 08:23:09.102281 systemd[1]: Mounted sysroot-usr.mount. May 13 08:23:09.106012 systemd[1]: Starting initrd-setup-root.service... May 13 08:23:09.121459 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 08:23:09.124497 initrd-setup-root[695]: cut: /sysroot/etc/passwd: No such file or directory May 13 08:23:09.148223 initrd-setup-root[704]: cut: /sysroot/etc/group: No such file or directory May 13 08:23:09.150896 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (696) May 13 08:23:09.155868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:23:09.155932 kernel: BTRFS info (device vda6): using free space tree May 13 08:23:09.155960 kernel: BTRFS info (device vda6): has skinny extents May 13 08:23:09.160774 initrd-setup-root[728]: cut: /sysroot/etc/shadow: No such file or directory May 13 08:23:09.168009 initrd-setup-root[738]: cut: /sysroot/etc/gshadow: No such file or directory May 13 08:23:09.172825 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 08:23:09.259543 systemd[1]: Finished initrd-setup-root.service. May 13 08:23:09.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.261299 systemd[1]: Starting ignition-mount.service... 
May 13 08:23:09.273062 systemd[1]: Starting sysroot-boot.service... May 13 08:23:09.280438 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 13 08:23:09.282265 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 13 08:23:09.295293 ignition[758]: INFO : Ignition 2.14.0 May 13 08:23:09.295293 ignition[758]: INFO : Stage: mount May 13 08:23:09.296559 ignition[758]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:09.296559 ignition[758]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:09.296559 ignition[758]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:09.299826 ignition[758]: INFO : mount: mount passed May 13 08:23:09.299826 ignition[758]: INFO : Ignition finished successfully May 13 08:23:09.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.299681 systemd[1]: Finished ignition-mount.service. May 13 08:23:09.319489 systemd[1]: Finished sysroot-boot.service. May 13 08:23:09.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.338204 coreos-metadata[690]: May 13 08:23:09.338 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 08:23:09.353469 coreos-metadata[690]: May 13 08:23:09.353 INFO Fetch successful May 13 08:23:09.353469 coreos-metadata[690]: May 13 08:23:09.353 INFO wrote hostname ci-3510-3-7-n-f896a7891b.novalocal to /sysroot/etc/hostname May 13 08:23:09.356518 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
May 13 08:23:09.356644 systemd[1]: Finished flatcar-openstack-hostname.service. May 13 08:23:09.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:09.358776 systemd[1]: Starting ignition-files.service... May 13 08:23:09.366062 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 08:23:09.381634 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (767) May 13 08:23:09.388532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:23:09.388552 kernel: BTRFS info (device vda6): using free space tree May 13 08:23:09.388564 kernel: BTRFS info (device vda6): has skinny extents May 13 08:23:09.404667 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 13 08:23:09.424355 ignition[786]: INFO : Ignition 2.14.0 May 13 08:23:09.424355 ignition[786]: INFO : Stage: files May 13 08:23:09.427203 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:09.427203 ignition[786]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:09.427203 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:09.435377 ignition[786]: DEBUG : files: compiled without relabeling support, skipping May 13 08:23:09.438161 ignition[786]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 08:23:09.438161 ignition[786]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 08:23:09.443101 ignition[786]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 08:23:09.445305 ignition[786]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 08:23:09.447416 ignition[786]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 08:23:09.447416 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 08:23:09.446708 unknown[786]: wrote ssh authorized keys file for user: core May 13 08:23:09.453217 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 08:23:09.453217 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 08:23:09.453217 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 08:23:09.542143 ignition[786]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 08:23:09.880223 systemd-networkd[642]: eth0: Gained IPv6LL May 13 08:23:10.046150 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 08:23:10.046150 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 08:23:10.050965 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 08:23:10.050965 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 08:23:10.050965 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 08:23:10.050965 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 08:23:10.059310 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 08:23:10.059310 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 08:23:10.059310 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:23:10.074879 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 08:23:10.908125 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 08:23:13.228361 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:23:13.229788 ignition[786]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" May 13 08:23:13.230523 ignition[786]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" May 13 08:23:13.231303 ignition[786]: INFO : files: op(d): [started] processing unit "containerd.service" May 13 08:23:13.232783 ignition[786]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 08:23:13.234005 ignition[786]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 08:23:13.234005 ignition[786]: INFO : files: op(d): [finished] processing unit "containerd.service" May 13 08:23:13.234005 ignition[786]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 13 08:23:13.236494 ignition[786]: INFO : files: op(f): op(10): 
[started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 08:23:13.236494 ignition[786]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 08:23:13.236494 ignition[786]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 13 08:23:13.236494 ignition[786]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 08:23:13.236494 ignition[786]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 08:23:13.236494 ignition[786]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 13 08:23:13.236494 ignition[786]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 13 08:23:13.243120 ignition[786]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 08:23:13.243120 ignition[786]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 08:23:13.243120 ignition[786]: INFO : files: files passed May 13 08:23:13.243120 ignition[786]: INFO : Ignition finished successfully May 13 08:23:13.254880 kernel: kauditd_printk_skb: 26 callbacks suppressed May 13 08:23:13.254920 kernel: audit: type=1130 audit(1747124593.244:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:13.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:13.242534 systemd[1]: Finished ignition-files.service. 
May 13 08:23:13.245737 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 08:23:13.253780 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 08:23:13.254555 systemd[1]: Starting ignition-quench.service...
May 13 08:23:13.259048 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 08:23:13.259146 systemd[1]: Finished ignition-quench.service.
May 13 08:23:13.270870 kernel: audit: type=1130 audit(1747124593.259:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.270924 kernel: audit: type=1131 audit(1747124593.259:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.271013 initrd-setup-root-after-ignition[811]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 08:23:13.277406 kernel: audit: type=1130 audit(1747124593.270:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.265699 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 08:23:13.271430 systemd[1]: Reached target ignition-complete.target.
May 13 08:23:13.278513 systemd[1]: Starting initrd-parse-etc.service...
May 13 08:23:13.293730 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 08:23:13.294381 systemd[1]: Finished initrd-parse-etc.service.
May 13 08:23:13.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.299815 systemd[1]: Reached target initrd-fs.target.
May 13 08:23:13.305511 kernel: audit: type=1130 audit(1747124593.294:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.305534 kernel: audit: type=1131 audit(1747124593.299:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.305034 systemd[1]: Reached target initrd.target.
May 13 08:23:13.306066 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 08:23:13.307092 systemd[1]: Starting dracut-pre-pivot.service...
May 13 08:23:13.320043 systemd[1]: Finished dracut-pre-pivot.service.
May 13 08:23:13.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.325615 kernel: audit: type=1130 audit(1747124593.320:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.326327 systemd[1]: Starting initrd-cleanup.service...
May 13 08:23:13.337547 systemd[1]: Stopped target nss-lookup.target.
May 13 08:23:13.338641 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 08:23:13.339746 systemd[1]: Stopped target timers.target.
May 13 08:23:13.340765 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 08:23:13.341428 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 08:23:13.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.343853 systemd[1]: Stopped target initrd.target.
May 13 08:23:13.347862 kernel: audit: type=1131 audit(1747124593.341:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.348040 systemd[1]: Stopped target basic.target.
May 13 08:23:13.349140 systemd[1]: Stopped target ignition-complete.target.
May 13 08:23:13.349758 systemd[1]: Stopped target ignition-diskful.target.
May 13 08:23:13.350842 systemd[1]: Stopped target initrd-root-device.target.
May 13 08:23:13.351895 systemd[1]: Stopped target remote-fs.target.
May 13 08:23:13.352883 systemd[1]: Stopped target remote-fs-pre.target.
May 13 08:23:13.353835 systemd[1]: Stopped target sysinit.target.
May 13 08:23:13.354772 systemd[1]: Stopped target local-fs.target.
May 13 08:23:13.355779 systemd[1]: Stopped target local-fs-pre.target.
May 13 08:23:13.356753 systemd[1]: Stopped target swap.target.
May 13 08:23:13.357688 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 08:23:13.363477 kernel: audit: type=1131 audit(1747124593.358:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.357875 systemd[1]: Stopped dracut-pre-mount.service.
May 13 08:23:13.358814 systemd[1]: Stopped target cryptsetup.target.
May 13 08:23:13.370021 kernel: audit: type=1131 audit(1747124593.364:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.364043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 08:23:13.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.364207 systemd[1]: Stopped dracut-initqueue.service.
May 13 08:23:13.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.365176 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 08:23:13.365334 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 08:23:13.370748 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 08:23:13.374368 iscsid[647]: iscsid shutting down.
May 13 08:23:13.370899 systemd[1]: Stopped ignition-files.service.
May 13 08:23:13.372781 systemd[1]: Stopping ignition-mount.service...
May 13 08:23:13.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.378809 systemd[1]: Stopping iscsid.service...
May 13 08:23:13.381102 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 08:23:13.381253 systemd[1]: Stopped kmod-static-nodes.service.
May 13 08:23:13.382664 systemd[1]: Stopping sysroot-boot.service...
May 13 08:23:13.383131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 08:23:13.383260 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 08:23:13.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.389242 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 08:23:13.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.389649 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 08:23:13.393672 systemd[1]: iscsid.service: Deactivated successfully.
May 13 08:23:13.394274 systemd[1]: Stopped iscsid.service.
May 13 08:23:13.398973 systemd[1]: Stopping iscsiuio.service...
May 13 08:23:13.399542 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 08:23:13.399658 systemd[1]: Stopped iscsiuio.service.
May 13 08:23:13.400783 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 08:23:13.401515 systemd[1]: Finished initrd-cleanup.service.
May 13 08:23:13.407449 ignition[824]: INFO : Ignition 2.14.0
May 13 08:23:13.407449 ignition[824]: INFO : Stage: umount
May 13 08:23:13.407449 ignition[824]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:13.407449 ignition[824]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:13.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.415748 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:13.415748 ignition[824]: INFO : umount: umount passed
May 13 08:23:13.415748 ignition[824]: INFO : Ignition finished successfully
May 13 08:23:13.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.413500 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 08:23:13.414446 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 08:23:13.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.414523 systemd[1]: Stopped ignition-mount.service.
May 13 08:23:13.415358 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 08:23:13.415412 systemd[1]: Stopped ignition-disks.service.
May 13 08:23:13.416061 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 08:23:13.416100 systemd[1]: Stopped ignition-kargs.service.
May 13 08:23:13.417455 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 08:23:13.417493 systemd[1]: Stopped ignition-fetch.service.
May 13 08:23:13.418460 systemd[1]: Stopped target network.target.
May 13 08:23:13.419472 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 08:23:13.419514 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 08:23:13.420633 systemd[1]: Stopped target paths.target.
May 13 08:23:13.421690 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 08:23:13.425673 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 08:23:13.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.426827 systemd[1]: Stopped target slices.target.
May 13 08:23:13.427835 systemd[1]: Stopped target sockets.target.
May 13 08:23:13.429004 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 08:23:13.429039 systemd[1]: Closed iscsid.socket.
May 13 08:23:13.429955 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 08:23:13.429985 systemd[1]: Closed iscsiuio.socket.
May 13 08:23:13.430890 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 08:23:13.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.430945 systemd[1]: Stopped ignition-setup.service.
May 13 08:23:13.432246 systemd[1]: Stopping systemd-networkd.service...
May 13 08:23:13.433388 systemd[1]: Stopping systemd-resolved.service...
May 13 08:23:13.436708 systemd-networkd[642]: eth0: DHCPv6 lease lost
May 13 08:23:13.441000 audit: BPF prog-id=9 op=UNLOAD
May 13 08:23:13.438012 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 08:23:13.438102 systemd[1]: Stopped systemd-networkd.service.
May 13 08:23:13.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.439268 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 08:23:13.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.439301 systemd[1]: Closed systemd-networkd.socket.
May 13 08:23:13.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.440800 systemd[1]: Stopping network-cleanup.service...
May 13 08:23:13.444211 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 08:23:13.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.444268 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 08:23:13.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.445264 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 08:23:13.454000 audit: BPF prog-id=6 op=UNLOAD
May 13 08:23:13.445303 systemd[1]: Stopped systemd-sysctl.service.
May 13 08:23:13.446617 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 08:23:13.446656 systemd[1]: Stopped systemd-modules-load.service.
May 13 08:23:13.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.447560 systemd[1]: Stopping systemd-udevd.service...
May 13 08:23:13.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.449521 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 08:23:13.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.450024 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 08:23:13.450117 systemd[1]: Stopped systemd-resolved.service.
May 13 08:23:13.453458 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 08:23:13.453564 systemd[1]: Stopped systemd-udevd.service.
May 13 08:23:13.455102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 08:23:13.455140 systemd[1]: Closed systemd-udevd-control.socket.
May 13 08:23:13.457512 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 08:23:13.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.457546 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 08:23:13.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.458465 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 08:23:13.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.458501 systemd[1]: Stopped dracut-pre-udev.service.
May 13 08:23:13.459683 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 08:23:13.459719 systemd[1]: Stopped dracut-cmdline.service.
May 13 08:23:13.460658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 08:23:13.460696 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 08:23:13.462256 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 08:23:13.462803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 08:23:13.462846 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 08:23:13.470812 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 08:23:13.470948 systemd[1]: Stopped network-cleanup.service.
May 13 08:23:13.471692 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 08:23:13.471782 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 08:23:13.691261 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 08:23:13.691486 systemd[1]: Stopped sysroot-boot.service.
May 13 08:23:13.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.694288 systemd[1]: Reached target initrd-switch-root.target.
May 13 08:23:13.696499 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 08:23:13.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:13.696667 systemd[1]: Stopped initrd-setup-root.service.
May 13 08:23:13.700482 systemd[1]: Starting initrd-switch-root.service...
May 13 08:23:13.715981 systemd[1]: Switching root.
May 13 08:23:13.717000 audit: BPF prog-id=8 op=UNLOAD
May 13 08:23:13.717000 audit: BPF prog-id=7 op=UNLOAD
May 13 08:23:13.719000 audit: BPF prog-id=5 op=UNLOAD
May 13 08:23:13.719000 audit: BPF prog-id=4 op=UNLOAD
May 13 08:23:13.719000 audit: BPF prog-id=3 op=UNLOAD
May 13 08:23:13.745190 systemd-journald[185]: Journal stopped
May 13 08:23:17.955362 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 13 08:23:17.955431 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 08:23:17.955450 kernel: SELinux: Class anon_inode not defined in policy.
May 13 08:23:17.955462 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 08:23:17.955476 kernel: SELinux: policy capability network_peer_controls=1
May 13 08:23:17.955487 kernel: SELinux: policy capability open_perms=1
May 13 08:23:17.955498 kernel: SELinux: policy capability extended_socket_class=1
May 13 08:23:17.955509 kernel: SELinux: policy capability always_check_network=0
May 13 08:23:17.956211 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 08:23:17.956235 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 08:23:17.956246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 08:23:17.956263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 08:23:17.956275 systemd[1]: Successfully loaded SELinux policy in 97.636ms.
May 13 08:23:17.956301 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.796ms.
May 13 08:23:17.956315 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 08:23:17.956328 systemd[1]: Detected virtualization kvm.
May 13 08:23:17.956339 systemd[1]: Detected architecture x86-64.
May 13 08:23:17.956351 systemd[1]: Detected first boot.
May 13 08:23:17.956363 systemd[1]: Hostname set to .
May 13 08:23:17.956375 systemd[1]: Initializing machine ID from VM UUID.
May 13 08:23:17.956389 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 08:23:17.956401 systemd[1]: Populated /etc with preset unit settings.
May 13 08:23:17.956419 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 08:23:17.956431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 08:23:17.956445 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 08:23:17.956458 systemd[1]: Queued start job for default target multi-user.target.
May 13 08:23:17.956472 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 08:23:17.956485 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 08:23:17.956497 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 08:23:17.956508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 13 08:23:17.956520 systemd[1]: Created slice system-getty.slice.
May 13 08:23:17.956532 systemd[1]: Created slice system-modprobe.slice.
May 13 08:23:17.956543 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 08:23:17.956556 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 08:23:17.956568 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 08:23:17.956603 systemd[1]: Created slice user.slice.
May 13 08:23:17.956616 systemd[1]: Started systemd-ask-password-console.path.
May 13 08:23:17.956628 systemd[1]: Started systemd-ask-password-wall.path.
May 13 08:23:17.956640 systemd[1]: Set up automount boot.automount.
May 13 08:23:17.956651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 08:23:17.956663 systemd[1]: Reached target integritysetup.target.
May 13 08:23:17.956676 systemd[1]: Reached target remote-cryptsetup.target.
May 13 08:23:17.956690 systemd[1]: Reached target remote-fs.target.
May 13 08:23:17.956702 systemd[1]: Reached target slices.target.
May 13 08:23:17.956714 systemd[1]: Reached target swap.target.
May 13 08:23:17.956726 systemd[1]: Reached target torcx.target.
May 13 08:23:17.956738 systemd[1]: Reached target veritysetup.target.
May 13 08:23:17.956749 systemd[1]: Listening on systemd-coredump.socket.
May 13 08:23:17.956761 systemd[1]: Listening on systemd-initctl.socket.
May 13 08:23:17.956772 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 08:23:17.956784 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 08:23:17.956798 systemd[1]: Listening on systemd-journald.socket.
May 13 08:23:17.956809 systemd[1]: Listening on systemd-networkd.socket.
May 13 08:23:17.956821 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 08:23:17.956832 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 08:23:17.956843 systemd[1]: Listening on systemd-userdbd.socket.
May 13 08:23:17.956855 systemd[1]: Mounting dev-hugepages.mount...
May 13 08:23:17.956868 systemd[1]: Mounting dev-mqueue.mount...
May 13 08:23:17.956879 systemd[1]: Mounting media.mount...
May 13 08:23:17.956891 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:17.956903 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 08:23:17.956916 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 08:23:17.956928 systemd[1]: Mounting tmp.mount...
May 13 08:23:17.956939 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 08:23:17.956951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:17.956963 systemd[1]: Starting kmod-static-nodes.service...
May 13 08:23:17.956975 systemd[1]: Starting modprobe@configfs.service...
May 13 08:23:17.956986 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:17.957002 systemd[1]: Starting modprobe@drm.service...
May 13 08:23:17.957014 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:17.957028 systemd[1]: Starting modprobe@fuse.service...
May 13 08:23:17.957039 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:17.957052 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 08:23:17.957064 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 08:23:17.957076 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 13 08:23:17.957087 systemd[1]: Starting systemd-journald.service...
May 13 08:23:17.957099 systemd[1]: Starting systemd-modules-load.service...
May 13 08:23:17.957111 systemd[1]: Starting systemd-network-generator.service...
May 13 08:23:17.957122 systemd[1]: Starting systemd-remount-fs.service...
May 13 08:23:17.957136 systemd[1]: Starting systemd-udev-trigger.service...
May 13 08:23:17.957148 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:17.957160 systemd[1]: Mounted dev-hugepages.mount.
May 13 08:23:17.957171 systemd[1]: Mounted dev-mqueue.mount.
May 13 08:23:17.957186 systemd-journald[967]: Journal started
May 13 08:23:17.957235 systemd-journald[967]: Runtime Journal (/run/log/journal/0df479e74e2a4a1093437d94bdc183f9) is 8.0M, max 78.4M, 70.4M free.
May 13 08:23:17.801000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 08:23:17.802000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 13 08:23:17.952000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 08:23:17.952000 audit[967]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffb829ecf0 a2=4000 a3=7fffb829ed8c items=0 ppid=1 pid=967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:17.952000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 08:23:17.961270 systemd[1]: Started systemd-journald.service.
May 13 08:23:17.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:17.962039 systemd[1]: Mounted media.mount.
May 13 08:23:17.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.962828 systemd[1]: Mounted sys-kernel-debug.mount. May 13 08:23:17.963758 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 08:23:17.968615 kernel: fuse: init (API version 7.34) May 13 08:23:17.964608 systemd[1]: Mounted tmp.mount. May 13 08:23:17.965480 systemd[1]: Finished kmod-static-nodes.service. May 13 08:23:17.967214 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 08:23:17.967376 systemd[1]: Finished modprobe@configfs.service. May 13 08:23:17.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:17.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.969088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:23:17.969225 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:23:17.969964 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 08:23:17.970099 systemd[1]: Finished modprobe@drm.service. May 13 08:23:17.970769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:23:17.970915 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:23:17.971804 systemd[1]: Finished systemd-modules-load.service. May 13 08:23:17.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.972472 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 08:23:17.972674 systemd[1]: Finished modprobe@fuse.service. May 13 08:23:17.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:17.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.974037 systemd[1]: Finished systemd-network-generator.service. May 13 08:23:17.974671 kernel: loop: module loaded May 13 08:23:17.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.975395 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:23:17.975646 systemd[1]: Finished modprobe@loop.service. May 13 08:23:17.976416 systemd[1]: Finished systemd-remount-fs.service. May 13 08:23:17.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:17.978013 systemd[1]: Reached target network-pre.target. May 13 08:23:17.980829 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 08:23:17.984966 systemd[1]: Mounting sys-kernel-config.mount... May 13 08:23:17.989081 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 08:23:17.994978 systemd[1]: Starting systemd-hwdb-update.service... 
May 13 08:23:17.996881 systemd[1]: Starting systemd-journal-flush.service...
May 13 08:23:17.997559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:17.999421 systemd[1]: Starting systemd-random-seed.service...
May 13 08:23:18.000021 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:18.004649 systemd[1]: Starting systemd-sysctl.service...
May 13 08:23:18.008628 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 13 08:23:18.010365 systemd[1]: Mounted sys-kernel-config.mount.
May 13 08:23:18.028716 systemd-journald[967]: Time spent on flushing to /var/log/journal/0df479e74e2a4a1093437d94bdc183f9 is 39.397ms for 1037 entries.
May 13 08:23:18.028716 systemd-journald[967]: System Journal (/var/log/journal/0df479e74e2a4a1093437d94bdc183f9) is 8.0M, max 584.8M, 576.8M free.
May 13 08:23:18.120005 systemd-journald[967]: Received client request to flush runtime journal.
May 13 08:23:18.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.051855 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 08:23:18.053780 systemd[1]: Starting systemd-sysusers.service...
May 13 08:23:18.067307 systemd[1]: Finished systemd-random-seed.service.
May 13 08:23:18.120644 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 08:23:18.067967 systemd[1]: Reached target first-boot-complete.target.
May 13 08:23:18.076672 systemd[1]: Finished systemd-sysctl.service.
May 13 08:23:18.103928 systemd[1]: Finished systemd-udev-trigger.service.
May 13 08:23:18.105865 systemd[1]: Starting systemd-udev-settle.service...
May 13 08:23:18.116043 systemd[1]: Finished systemd-sysusers.service.
May 13 08:23:18.118048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 08:23:18.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.121996 systemd[1]: Finished systemd-journal-flush.service.
May 13 08:23:18.164029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 08:23:18.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.725468 systemd[1]: Finished systemd-hwdb-update.service.
May 13 08:23:18.739777 kernel: kauditd_printk_skb: 77 callbacks suppressed
May 13 08:23:18.739964 kernel: audit: type=1130 audit(1747124598.726:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.729353 systemd[1]: Starting systemd-udevd.service...
May 13 08:23:18.784880 systemd-udevd[1024]: Using default interface naming scheme 'v252'.
May 13 08:23:18.846835 systemd[1]: Started systemd-udevd.service.
May 13 08:23:18.851486 systemd[1]: Starting systemd-networkd.service...
May 13 08:23:18.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.852635 kernel: audit: type=1130 audit(1747124598.847:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.897322 systemd[1]: Starting systemd-userdbd.service...
May 13 08:23:18.970543 systemd[1]: Started systemd-userdbd.service.
May 13 08:23:18.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.977613 kernel: audit: type=1130 audit(1747124598.971:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:18.992901 systemd[1]: Found device dev-ttyS0.device.
May 13 08:23:19.016600 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 08:23:19.040288 kernel: ACPI: button: Power Button [PWRF]
May 13 08:23:19.052000 audit[1037]: AVC avc: denied { confidentiality } for pid=1037 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 13 08:23:19.064591 kernel: audit: type=1400 audit(1747124599.052:118): avc: denied { confidentiality } for pid=1037 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 13 08:23:19.052000 audit[1037]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561a34adf260 a1=338ac a2=7f984490abc5 a3=5 items=110 ppid=1024 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:19.086266 kernel: audit: type=1300 audit(1747124599.052:118): arch=c000003e syscall=175 success=yes exit=0 a0=561a34adf260 a1=338ac a2=7f984490abc5 a3=5 items=110 ppid=1024 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:19.086338 kernel: audit: type=1307 audit(1747124599.052:118): cwd="/"
May 13 08:23:19.086359 kernel: audit: type=1302 audit(1747124599.052:118): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: CWD cwd="/"
May 13 08:23:19.052000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.085685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 08:23:19.052000 audit: PATH item=1 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.093615 kernel: audit: type=1302 audit(1747124599.052:118): item=1 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.094779 systemd-networkd[1030]: lo: Link UP
May 13 08:23:19.094787 systemd-networkd[1030]: lo: Gained carrier
May 13 08:23:19.095261 systemd-networkd[1030]: Enumeration completed
May 13 08:23:19.095372 systemd[1]: Started systemd-networkd.service.
May 13 08:23:19.052000 audit: PATH item=2 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.097223 systemd-networkd[1030]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 08:23:19.101698 kernel: audit: type=1302 audit(1747124599.052:118): item=2 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=3 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.102649 systemd-networkd[1030]: eth0: Link UP
May 13 08:23:19.102655 systemd-networkd[1030]: eth0: Gained carrier
May 13 08:23:19.110610 kernel: audit: type=1302 audit(1747124599.052:118): item=3 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=4 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=5 name=(null) inode=14367 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=6 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=7 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=8 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=9 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=10 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=11 name=(null) inode=14370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=12 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=13 name=(null) inode=14371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=14 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=15 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=16 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=17 name=(null) inode=14373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=18 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=19 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=20 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=21 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=22 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=23 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=24 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=25 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=26 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=27 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=28 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=29 name=(null) inode=14379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=30 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=31 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=32 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=33 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=34 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=35 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=36 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=37 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=38 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=39 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=40 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=41 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=42 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=43 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=44 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=45 name=(null) inode=14387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=46 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=47 name=(null) inode=14388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=48 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=49 name=(null) inode=14389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=50 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=51 name=(null) inode=14390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=52 name=(null) inode=14386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=53 name=(null) inode=14391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=55 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=56 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=57 name=(null) inode=14393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=58 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=59 name=(null) inode=14394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=60 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=61 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=62 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=63 name=(null) inode=14396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=64 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=65 name=(null) inode=14397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=66 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.052000 audit: PATH item=67 name=(null) inode=14398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=68 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=69 name=(null) inode=14399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=70 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=71 name=(null) inode=14400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=72 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=73 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=74 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=75 name=(null) inode=14402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=76 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=77 name=(null) inode=14403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=78 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=79 name=(null) inode=14404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=80 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=81 name=(null) inode=14405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=82 name=(null) inode=14401 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=83 name=(null) inode=14406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=84 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=85 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=86 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=87 name=(null) inode=14408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=88 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=89 name=(null) inode=14409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=90 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=91 name=(null) inode=14410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=92 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=93 name=(null) inode=14411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=94 name=(null) inode=14407 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=95 name=(null) inode=14412 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=96 name=(null) inode=14392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=97 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=98 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:19.052000 audit: PATH item=99 name=(null) inode=14414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=100 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=101 name=(null) inode=14415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=102 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=103 name=(null) inode=14416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=104 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=105 name=(null) inode=14417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=106 name=(null) inode=14413 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=107 name=(null) inode=14418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PATH item=109 name=(null) inode=14419 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:19.052000 audit: PROCTITLE proctitle="(udev-worker)" May 13 08:23:19.123622 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 08:23:19.124749 systemd-networkd[1030]: eth0: DHCPv4 address 172.24.4.25/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 08:23:19.139612 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 08:23:19.144607 kernel: mousedev: PS/2 mouse device common for all mice May 13 08:23:19.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:19.188036 systemd[1]: Finished systemd-udev-settle.service. May 13 08:23:19.189988 systemd[1]: Starting lvm2-activation-early.service... May 13 08:23:19.222256 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 08:23:19.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:19.248010 systemd[1]: Finished lvm2-activation-early.service. May 13 08:23:19.249367 systemd[1]: Reached target cryptsetup.target. May 13 08:23:19.252944 systemd[1]: Starting lvm2-activation.service... May 13 08:23:19.256539 lvm[1061]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 13 08:23:19.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.283767 systemd[1]: Finished lvm2-activation.service.
May 13 08:23:19.285098 systemd[1]: Reached target local-fs-pre.target.
May 13 08:23:19.286258 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 08:23:19.286303 systemd[1]: Reached target local-fs.target.
May 13 08:23:19.288082 systemd[1]: Reached target machines.target.
May 13 08:23:19.291980 systemd[1]: Starting ldconfig.service...
May 13 08:23:19.295241 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:19.295522 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:19.298257 systemd[1]: Starting systemd-boot-update.service...
May 13 08:23:19.302873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 13 08:23:19.307562 systemd[1]: Starting systemd-machine-id-commit.service...
May 13 08:23:19.315535 systemd[1]: Starting systemd-sysext.service...
May 13 08:23:19.332528 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1064 (bootctl)
May 13 08:23:19.334952 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 13 08:23:19.359438 systemd[1]: Unmounting usr-share-oem.mount...
May 13 08:23:19.364535 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 13 08:23:19.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.367393 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 13 08:23:19.367664 systemd[1]: Unmounted usr-share-oem.mount.
May 13 08:23:19.417657 kernel: loop0: detected capacity change from 0 to 210664
May 13 08:23:19.621565 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 08:23:19.623159 systemd[1]: Finished systemd-machine-id-commit.service.
May 13 08:23:19.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.661037 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 08:23:19.692645 kernel: loop1: detected capacity change from 0 to 210664
May 13 08:23:19.734393 (sd-sysext)[1080]: Using extensions 'kubernetes'.
May 13 08:23:19.737701 (sd-sysext)[1080]: Merged extensions into '/usr'.
May 13 08:23:19.797189 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31)
May 13 08:23:19.797189 systemd-fsck[1077]: /dev/vda1: 790 files, 120692/258078 clusters
May 13 08:23:19.798097 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 13 08:23:19.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.800429 systemd[1]: Mounting boot.mount...
May 13 08:23:19.800968 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:19.802788 systemd[1]: Mounting usr-share-oem.mount...
May 13 08:23:19.803559 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:19.805273 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:19.810826 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:19.814358 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:19.815051 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:19.815201 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:19.815356 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:19.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.818639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:19.818782 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:19.819630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:19.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.822284 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:19.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.823241 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 08:23:19.823395 systemd[1]: Finished modprobe@loop.service.
May 13 08:23:19.825713 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:19.825822 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:19.830781 systemd[1]: Mounted usr-share-oem.mount.
May 13 08:23:19.835527 systemd[1]: Finished systemd-sysext.service.
May 13 08:23:19.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:19.841323 systemd[1]: Starting ensure-sysext.service...
May 13 08:23:19.844399 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 13 08:23:19.852394 systemd[1]: Mounted boot.mount.
May 13 08:23:19.856341 systemd[1]: Reloading.
May 13 08:23:19.860527 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 13 08:23:19.869888 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 08:23:19.873439 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 08:23:19.939414 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-13T08:23:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 08:23:19.939820 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-13T08:23:19Z" level=info msg="torcx already run"
May 13 08:23:20.079515 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 08:23:20.079538 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 08:23:20.103503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 08:23:20.173838 systemd[1]: Finished systemd-boot-update.service.
May 13 08:23:20.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.174771 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 13 08:23:20.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.179209 systemd[1]: Starting audit-rules.service...
May 13 08:23:20.180923 systemd[1]: Starting clean-ca-certificates.service...
May 13 08:23:20.182853 systemd[1]: Starting systemd-journal-catalog-update.service...
May 13 08:23:20.189948 systemd[1]: Starting systemd-resolved.service...
May 13 08:23:20.195242 systemd[1]: Starting systemd-timesyncd.service...
May 13 08:23:20.199837 systemd[1]: Starting systemd-update-utmp.service...
May 13 08:23:20.200947 systemd[1]: Finished clean-ca-certificates.service.
May 13 08:23:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.213059 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:20.222903 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.223148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:20.222000 audit[1180]: SYSTEM_BOOT pid=1180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.224595 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:20.226236 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:20.227955 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:20.230074 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:20.230226 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:20.230359 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:20.230453 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.232649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:20.232878 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:20.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.236018 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:20.240742 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.240978 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:20.242342 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:20.245825 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:20.245988 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:20.246127 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:20.246235 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.247239 systemd[1]: Finished systemd-update-utmp.service.
May 13 08:23:20.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.248205 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 08:23:20.248362 systemd[1]: Finished modprobe@loop.service.
May 13 08:23:20.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.251260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:20.251414 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:20.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.253644 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:20.261758 systemd[1]: Finished systemd-journal-catalog-update.service.
May 13 08:23:20.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.265178 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.265444 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:20.267069 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:20.270811 systemd[1]: Starting modprobe@drm.service...
May 13 08:23:20.272686 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:20.273783 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:20.273916 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:20.277258 systemd[1]: Starting systemd-networkd-wait-online.service...
May 13 08:23:20.278033 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:20.278168 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:20.281951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:20.282124 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:20.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.285125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:20.285297 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:20.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.286814 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:20.289496 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 08:23:20.289732 systemd[1]: Finished modprobe@loop.service.
May 13 08:23:20.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.290765 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:20.292461 systemd[1]: Finished ensure-sysext.service.
May 13 08:23:20.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.302135 ldconfig[1063]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 08:23:20.303217 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 08:23:20.303396 systemd[1]: Finished modprobe@drm.service.
May 13 08:23:20.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.315160 systemd[1]: Finished ldconfig.service.
May 13 08:23:20.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.317301 systemd[1]: Starting systemd-update-done.service...
May 13 08:23:20.326462 systemd[1]: Finished systemd-update-done.service.
May 13 08:23:20.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:20.360000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 13 08:23:20.360000 audit[1216]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc276d5b0 a2=420 a3=0 items=0 ppid=1173 pid=1216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:20.360000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 13 08:23:20.360976 augenrules[1216]: No rules
May 13 08:23:20.362040 systemd[1]: Finished audit-rules.service.
May 13 08:23:20.377835 systemd[1]: Started systemd-timesyncd.service.
May 13 08:23:20.378420 systemd[1]: Reached target time-set.target.
May 13 08:23:20.386158 systemd-resolved[1176]: Positive Trust Anchors:
May 13 08:23:20.386454 systemd-resolved[1176]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 08:23:20.386546 systemd-resolved[1176]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 08:23:20.393501 systemd-resolved[1176]: Using system hostname 'ci-3510-3-7-n-f896a7891b.novalocal'.
May 13 08:23:20.395066 systemd[1]: Started systemd-resolved.service.
May 13 08:23:20.395628 systemd[1]: Reached target network.target.
May 13 08:23:20.396095 systemd[1]: Reached target nss-lookup.target.
May 13 08:23:20.396538 systemd[1]: Reached target sysinit.target.
May 13 08:23:20.397067 systemd[1]: Started motdgen.path.
May 13 08:23:20.397511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 13 08:23:20.398150 systemd[1]: Started logrotate.timer.
May 13 08:23:20.398677 systemd[1]: Started mdadm.timer.
May 13 08:23:20.399109 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 13 08:23:20.399556 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 08:23:20.399607 systemd[1]: Reached target paths.target.
May 13 08:23:20.400030 systemd[1]: Reached target timers.target.
May 13 08:23:20.400807 systemd[1]: Listening on dbus.socket.
May 13 08:23:20.402712 systemd[1]: Starting docker.socket...
May 13 08:23:20.405023 systemd[1]: Listening on sshd.socket.
May 13 08:23:20.405615 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:20.405979 systemd[1]: Listening on docker.socket.
May 13 08:23:20.406420 systemd[1]: Reached target sockets.target.
May 13 08:23:20.406872 systemd[1]: Reached target basic.target.
May 13 08:23:20.407441 systemd[1]: System is tainted: cgroupsv1
May 13 08:23:20.407493 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 08:23:20.407518 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 08:23:20.408663 systemd[1]: Starting containerd.service...
May 13 08:23:20.410106 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 13 08:23:20.416016 systemd[1]: Starting dbus.service...
May 13 08:23:20.417622 systemd[1]: Starting enable-oem-cloudinit.service...
May 13 08:23:20.420951 systemd[1]: Starting extend-filesystems.service...
May 13 08:23:20.421638 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 13 08:23:20.423134 systemd[1]: Starting motdgen.service...
May 13 08:23:20.427078 systemd[1]: Starting prepare-helm.service...
May 13 08:23:20.429712 jq[1231]: false
May 13 08:23:20.430900 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 13 08:23:20.433801 systemd[1]: Starting sshd-keygen.service...
May 13 08:23:20.440817 systemd-networkd[1030]: eth0: Gained IPv6LL
May 13 08:23:20.443258 systemd[1]: Starting systemd-logind.service...
May 13 08:23:20.443986 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:20.444060 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 08:23:20.467062 jq[1243]: true
May 13 08:23:20.448882 systemd[1]: Starting update-engine.service...
May 13 08:23:20.451413 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 13 08:23:20.455307 systemd[1]: Finished systemd-networkd-wait-online.service.
May 13 08:23:20.461832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 08:23:20.462089 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 13 08:23:20.463111 systemd[1]: Reached target network-online.target.
May 13 08:23:20.470023 systemd[1]: Starting kubelet.service...
May 13 08:23:20.471325 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 08:23:20.471693 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 13 08:23:20.516266 extend-filesystems[1232]: Found loop1
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda1
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda2
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda3
May 13 08:23:20.516266 extend-filesystems[1232]: Found usr
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda4
May 13 08:23:20.516266 extend-filesystems[1232]: Found vda6
May 13 08:23:20.545781 extend-filesystems[1232]: Found vda7
May 13 08:23:20.545781 extend-filesystems[1232]: Found vda9
May 13 08:23:20.545781 extend-filesystems[1232]: Checking size of /dev/vda9
May 13 08:23:20.556206 tar[1247]: linux-amd64/helm
May 13 08:23:20.518935 dbus-daemon[1230]: [system] SELinux support is enabled
May 13 08:23:20.519139 systemd[1]: Started dbus.service.
May 13 08:23:20.557172 jq[1254]: true
May 13 08:23:20.521896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 08:23:20.521922 systemd[1]: Reached target system-config.target.
May 13 08:23:20.522433 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 08:23:20.570067 extend-filesystems[1232]: Resized partition /dev/vda9
May 13 08:23:20.522452 systemd[1]: Reached target user-config.target.
May 13 08:23:20.541765 systemd[1]: motdgen.service: Deactivated successfully.
May 13 08:23:20.574342 extend-filesystems[1282]: resize2fs 1.46.5 (30-Dec-2021)
May 13 08:23:20.542013 systemd[1]: Finished motdgen.service.
May 13 08:23:20.578861 systemd-timesyncd[1177]: Contacted time server 45.55.58.103:123 (0.flatcar.pool.ntp.org).
May 13 08:23:20.578942 systemd-timesyncd[1177]: Initial clock synchronization to Tue 2025-05-13 08:23:20.864624 UTC.
May 13 08:23:20.606388 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 08:23:20.610603 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 08:23:20.653727 env[1260]: time="2025-05-13T08:23:20.651637733Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 13 08:23:20.654011 extend-filesystems[1282]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 08:23:20.654011 extend-filesystems[1282]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 08:23:20.654011 extend-filesystems[1282]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 08:23:20.664673 extend-filesystems[1232]: Resized filesystem in /dev/vda9
May 13 08:23:20.654809 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 08:23:20.655095 systemd[1]: Finished extend-filesystems.service.
May 13 08:23:20.689105 bash[1293]: Updated "/home/core/.ssh/authorized_keys"
May 13 08:23:20.688282 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 13 08:23:20.690637 update_engine[1242]: I0513 08:23:20.689020  1242 main.cc:92] Flatcar Update Engine starting
May 13 08:23:20.694849 systemd-logind[1240]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 08:23:20.694874 systemd-logind[1240]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 08:23:20.706476 update_engine[1242]: I0513 08:23:20.706121  1242 update_check_scheduler.cc:74] Next update check in 3m54s
May 13 08:23:20.697747 systemd-logind[1240]: New seat seat0.
May 13 08:23:20.701971 systemd[1]: Started update-engine.service.
May 13 08:23:20.704263 systemd[1]: Started locksmithd.service.
May 13 08:23:20.705077 systemd[1]: Started systemd-logind.service.
May 13 08:23:20.724323 env[1260]: time="2025-05-13T08:23:20.724280535Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 08:23:20.724593 env[1260]: time="2025-05-13T08:23:20.724556753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.726262 env[1260]: time="2025-05-13T08:23:20.726232345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 08:23:20.727994 env[1260]: time="2025-05-13T08:23:20.727973882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.728317 env[1260]: time="2025-05-13T08:23:20.728292910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 08:23:20.728391 env[1260]: time="2025-05-13T08:23:20.728374613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.728459 env[1260]: time="2025-05-13T08:23:20.728442080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 13 08:23:20.728523 env[1260]: time="2025-05-13T08:23:20.728508304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.730389 env[1260]: time="2025-05-13T08:23:20.730368062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.730730 env[1260]: time="2025-05-13T08:23:20.730710013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 08:23:20.732687 env[1260]: time="2025-05-13T08:23:20.732661623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 08:23:20.732760 env[1260]: time="2025-05-13T08:23:20.732745000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 08:23:20.732873 env[1260]: time="2025-05-13T08:23:20.732853924Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 13 08:23:20.734021 env[1260]: time="2025-05-13T08:23:20.734002859Z" level=info msg="metadata content store policy set" policy=shared
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749184041Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749227823Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749243974Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749280502Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749298556Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749314636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749329234Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749345925Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749360953Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749378586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749418260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749434090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749544587Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 08:23:20.751254 env[1260]: time="2025-05-13T08:23:20.749656858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750008738Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750037742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750052871Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750098236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750113915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750128523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750141036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750154431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750168397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750181432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750193885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750209184Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750341111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750361900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 08:23:20.751692 env[1260]: time="2025-05-13T08:23:20.750376768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 08:23:20.752042 env[1260]: time="2025-05-13T08:23:20.750390464Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 08:23:20.752042 env[1260]: time="2025-05-13T08:23:20.750407336Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 13 08:23:20.752042 env[1260]: time="2025-05-13T08:23:20.750421933Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 08:23:20.752042 env[1260]: time="2025-05-13T08:23:20.750442231Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 13 08:23:20.752042 env[1260]: time="2025-05-13T08:23:20.750481154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 08:23:20.752154 env[1260]: time="2025-05-13T08:23:20.750703220Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 08:23:20.752154 env[1260]: time="2025-05-13T08:23:20.750770286Z" level=info msg="Connect containerd service"
May 13 08:23:20.752154 env[1260]: time="2025-05-13T08:23:20.750806414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.752960764Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754659811Z" level=info msg="Start subscribing containerd event"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754713251Z" level=info msg="Start recovering state"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754784595Z" level=info msg="Start event monitor"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754808670Z" level=info msg="Start snapshots syncer"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754825241Z" level=info msg="Start cni network conf syncer for default"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.754834659Z" level=info msg="Start streaming server"
May 13 08:23:20.755715 env[1260]: time="2025-05-13T08:23:20.755087593Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 08:23:20.757055 env[1260]: time="2025-05-13T08:23:20.757037440Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 08:23:20.761943 systemd[1]: Started containerd.service.
May 13 08:23:20.786509 env[1260]: time="2025-05-13T08:23:20.786469440Z" level=info msg="containerd successfully booted in 0.148955s"
May 13 08:23:21.150172 tar[1247]: linux-amd64/LICENSE
May 13 08:23:21.150172 tar[1247]: linux-amd64/README.md
May 13 08:23:21.156085 systemd[1]: Finished prepare-helm.service.
May 13 08:23:21.363002 locksmithd[1299]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 08:23:21.673583 systemd[1]: Created slice system-sshd.slice.
May 13 08:23:22.261729 systemd[1]: Started kubelet.service.
May 13 08:23:22.847931 sshd_keygen[1266]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 08:23:22.884687 systemd[1]: Finished sshd-keygen.service.
May 13 08:23:22.886940 systemd[1]: Starting issuegen.service...
May 13 08:23:22.894506 systemd[1]: Started sshd@0-172.24.4.25:22-172.24.4.1:41806.service.
May 13 08:23:22.900972 systemd[1]: issuegen.service: Deactivated successfully.
May 13 08:23:22.901236 systemd[1]: Finished issuegen.service.
May 13 08:23:22.903444 systemd[1]: Starting systemd-user-sessions.service...
May 13 08:23:22.914156 systemd[1]: Finished systemd-user-sessions.service.
May 13 08:23:22.916292 systemd[1]: Started getty@tty1.service.
May 13 08:23:22.918055 systemd[1]: Started serial-getty@ttyS0.service.
May 13 08:23:22.920098 systemd[1]: Reached target getty.target.
May 13 08:23:23.624877 kubelet[1316]: E0513 08:23:23.624748    1316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:23:23.626537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:23:23.626737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:23:23.837739 sshd[1332]: Accepted publickey for core from 172.24.4.1 port 41806 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:23.842527 sshd[1332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:23.869278 systemd[1]: Created slice user-500.slice.
May 13 08:23:23.873326 systemd[1]: Starting user-runtime-dir@500.service...
May 13 08:23:23.882774 systemd-logind[1240]: New session 1 of user core.
May 13 08:23:23.903175 systemd[1]: Finished user-runtime-dir@500.service.
May 13 08:23:23.909584 systemd[1]: Starting user@500.service...
May 13 08:23:23.922292 (systemd)[1346]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:24.057511 systemd[1346]: Queued start job for default target default.target.
May 13 08:23:24.057778 systemd[1346]: Reached target paths.target.
May 13 08:23:24.057798 systemd[1346]: Reached target sockets.target.
May 13 08:23:24.057830 systemd[1346]: Reached target timers.target.
May 13 08:23:24.057845 systemd[1346]: Reached target basic.target.
May 13 08:23:24.057890 systemd[1346]: Reached target default.target.
May 13 08:23:24.057920 systemd[1346]: Startup finished in 121ms.
May 13 08:23:24.059231 systemd[1]: Started user@500.service.
May 13 08:23:24.063250 systemd[1]: Started session-1.scope.
May 13 08:23:24.556407 systemd[1]: Started sshd@1-172.24.4.25:22-172.24.4.1:55030.service.
May 13 08:23:26.188405 sshd[1355]: Accepted publickey for core from 172.24.4.1 port 55030 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:26.194049 sshd[1355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:26.205824 systemd-logind[1240]: New session 2 of user core.
May 13 08:23:26.206678 systemd[1]: Started session-2.scope.
May 13 08:23:26.838269 sshd[1355]: pam_unix(sshd:session): session closed for user core
May 13 08:23:26.840217 systemd[1]: Started sshd@2-172.24.4.25:22-172.24.4.1:55046.service.
May 13 08:23:26.848191 systemd[1]: sshd@1-172.24.4.25:22-172.24.4.1:55030.service: Deactivated successfully.
May 13 08:23:26.851440 systemd[1]: session-2.scope: Deactivated successfully.
May 13 08:23:26.852855 systemd-logind[1240]: Session 2 logged out. Waiting for processes to exit.
May 13 08:23:26.855734 systemd-logind[1240]: Removed session 2.
May 13 08:23:27.706193 coreos-metadata[1226]: May 13 08:23:27.706 WARN failed to locate config-drive, using the metadata service API instead
May 13 08:23:27.805787 coreos-metadata[1226]: May 13 08:23:27.805 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 13 08:23:28.021038 sshd[1360]: Accepted publickey for core from 172.24.4.1 port 55046 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:28.023240 sshd[1360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:28.033733 systemd-logind[1240]: New session 3 of user core.
May 13 08:23:28.034289 systemd[1]: Started session-3.scope.
May 13 08:23:28.086361 coreos-metadata[1226]: May 13 08:23:28.086 INFO Fetch successful
May 13 08:23:28.086361 coreos-metadata[1226]: May 13 08:23:28.086 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 13 08:23:28.100075 coreos-metadata[1226]: May 13 08:23:28.099 INFO Fetch successful
May 13 08:23:28.106358 unknown[1226]: wrote ssh authorized keys file for user: core
May 13 08:23:28.144618 update-ssh-keys[1369]: Updated "/home/core/.ssh/authorized_keys"
May 13 08:23:28.145550 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
May 13 08:23:28.146411 systemd[1]: Reached target multi-user.target.
May 13 08:23:28.149562 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 13 08:23:28.172150 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 13 08:23:28.172739 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 13 08:23:28.173191 systemd[1]: Startup finished in 9.404s (kernel) + 14.109s (userspace) = 23.514s.
May 13 08:23:28.666186 sshd[1360]: pam_unix(sshd:session): session closed for user core
May 13 08:23:28.672517 systemd-logind[1240]: Session 3 logged out. Waiting for processes to exit.
May 13 08:23:28.673750 systemd[1]: sshd@2-172.24.4.25:22-172.24.4.1:55046.service: Deactivated successfully.
May 13 08:23:28.675526 systemd[1]: session-3.scope: Deactivated successfully.
May 13 08:23:28.678910 systemd-logind[1240]: Removed session 3.
May 13 08:23:33.879357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 08:23:33.879860 systemd[1]: Stopped kubelet.service.
May 13 08:23:33.882945 systemd[1]: Starting kubelet.service...
May 13 08:23:34.176128 systemd[1]: Started kubelet.service.
May 13 08:23:34.403662 kubelet[1385]: E0513 08:23:34.403472    1385 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:23:34.411393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:23:34.411819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:23:38.750208 systemd[1]: Started sshd@3-172.24.4.25:22-172.24.4.1:49736.service.
May 13 08:23:40.033301 sshd[1393]: Accepted publickey for core from 172.24.4.1 port 49736 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:40.036837 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:40.047008 systemd-logind[1240]: New session 4 of user core.
May 13 08:23:40.047765 systemd[1]: Started session-4.scope.
May 13 08:23:40.818708 sshd[1393]: pam_unix(sshd:session): session closed for user core
May 13 08:23:40.823007 systemd[1]: Started sshd@4-172.24.4.25:22-172.24.4.1:49742.service.
May 13 08:23:40.829413 systemd[1]: sshd@3-172.24.4.25:22-172.24.4.1:49736.service: Deactivated successfully.
May 13 08:23:40.832324 systemd[1]: session-4.scope: Deactivated successfully.
May 13 08:23:40.833137 systemd-logind[1240]: Session 4 logged out. Waiting for processes to exit.
May 13 08:23:40.836247 systemd-logind[1240]: Removed session 4.
May 13 08:23:42.057780 sshd[1398]: Accepted publickey for core from 172.24.4.1 port 49742 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:42.061089 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:42.071450 systemd-logind[1240]: New session 5 of user core.
May 13 08:23:42.072164 systemd[1]: Started session-5.scope.
May 13 08:23:42.701389 sshd[1398]: pam_unix(sshd:session): session closed for user core
May 13 08:23:42.706245 systemd[1]: Started sshd@5-172.24.4.25:22-172.24.4.1:49744.service.
May 13 08:23:42.713367 systemd[1]: sshd@4-172.24.4.25:22-172.24.4.1:49742.service: Deactivated successfully.
May 13 08:23:42.718043 systemd-logind[1240]: Session 5 logged out. Waiting for processes to exit.
May 13 08:23:42.718184 systemd[1]: session-5.scope: Deactivated successfully.
May 13 08:23:42.724357 systemd-logind[1240]: Removed session 5.
May 13 08:23:43.928181 sshd[1405]: Accepted publickey for core from 172.24.4.1 port 49744 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:43.931710 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:43.942096 systemd-logind[1240]: New session 6 of user core.
May 13 08:23:43.942924 systemd[1]: Started session-6.scope.
May 13 08:23:44.515414 sshd[1405]: pam_unix(sshd:session): session closed for user core
May 13 08:23:44.517329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 08:23:44.517714 systemd[1]: Stopped kubelet.service.
May 13 08:23:44.521033 systemd[1]: Starting kubelet.service...
May 13 08:23:44.523822 systemd[1]: Started sshd@6-172.24.4.25:22-172.24.4.1:46746.service.
May 13 08:23:44.530124 systemd[1]: sshd@5-172.24.4.25:22-172.24.4.1:49744.service: Deactivated successfully.
May 13 08:23:44.543221 systemd[1]: session-6.scope: Deactivated successfully.
May 13 08:23:44.543734 systemd-logind[1240]: Session 6 logged out. Waiting for processes to exit.
May 13 08:23:44.551634 systemd-logind[1240]: Removed session 6.
May 13 08:23:44.828961 systemd[1]: Started kubelet.service.
May 13 08:23:44.896875 kubelet[1423]: E0513 08:23:44.896833    1423 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:23:44.898779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:23:44.898945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:23:46.037652 sshd[1413]: Accepted publickey for core from 172.24.4.1 port 46746 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:46.040339 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:46.050910 systemd-logind[1240]: New session 7 of user core.
May 13 08:23:46.051715 systemd[1]: Started session-7.scope.
May 13 08:23:46.542262 sudo[1434]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 08:23:46.543520 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 08:23:46.557982 dbus-daemon[1230]: \xd0\u000dȃ\xafU: received setenforce notice (enforcing=631669200)
May 13 08:23:46.562065 sudo[1434]: pam_unix(sudo:session): session closed for user root
May 13 08:23:46.827478 sshd[1413]: pam_unix(sshd:session): session closed for user core
May 13 08:23:46.832485 systemd[1]: Started sshd@7-172.24.4.25:22-172.24.4.1:46756.service.
May 13 08:23:46.838994 systemd[1]: sshd@6-172.24.4.25:22-172.24.4.1:46746.service: Deactivated successfully.
May 13 08:23:46.841778 systemd[1]: session-7.scope: Deactivated successfully.
May 13 08:23:46.842788 systemd-logind[1240]: Session 7 logged out. Waiting for processes to exit.
May 13 08:23:46.845666 systemd-logind[1240]: Removed session 7.
May 13 08:23:48.326331 sshd[1436]: Accepted publickey for core from 172.24.4.1 port 46756 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:23:48.329135 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:23:48.339895 systemd-logind[1240]: New session 8 of user core.
May 13 08:23:48.340506 systemd[1]: Started session-8.scope.
May 13 08:23:48.758369 sudo[1443]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 08:23:48.758939 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 08:23:48.765615 sudo[1443]: pam_unix(sudo:session): session closed for user root
May 13 08:23:48.776790 sudo[1442]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 08:23:48.777942 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 08:23:48.810425 systemd[1]: Stopping audit-rules.service...
May 13 08:23:48.810000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 13 08:23:48.815406 kernel: kauditd_printk_skb: 147 callbacks suppressed
May 13 08:23:48.815514 kernel: audit: type=1305 audit(1747124628.810:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 13 08:23:48.810000 audit[1446]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe761d8db0 a2=420 a3=0 items=0 ppid=1 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:48.822836 auditctl[1446]: No rules
May 13 08:23:48.837726 kernel: audit: type=1300 audit(1747124628.810:157): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe761d8db0 a2=420 a3=0 items=0 ppid=1 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:48.838200 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 08:23:48.838805 systemd[1]: Stopped audit-rules.service.
May 13 08:23:48.810000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 13 08:23:48.842876 systemd[1]: Starting audit-rules.service...
May 13 08:23:48.848628 kernel: audit: type=1327 audit(1747124628.810:157): proctitle=2F7362696E2F617564697463746C002D44
May 13 08:23:48.853933 kernel: audit: type=1131 audit(1747124628.837:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.892209 augenrules[1464]: No rules
May 13 08:23:48.894304 systemd[1]: Finished audit-rules.service.
May 13 08:23:48.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.897261 sudo[1442]: pam_unix(sudo:session): session closed for user root
May 13 08:23:48.895000 audit[1442]: USER_END pid=1442 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.917373 kernel: audit: type=1130 audit(1747124628.893:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.917540 kernel: audit: type=1106 audit(1747124628.895:160): pid=1442 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.917728 kernel: audit: type=1104 audit(1747124628.895:161): pid=1442 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 08:23:48.895000 audit[1442]: CRED_DISP pid=1442 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 13 08:23:49.059630 sshd[1436]: pam_unix(sshd:session): session closed for user core
May 13 08:23:49.065650 systemd[1]: Started sshd@8-172.24.4.25:22-172.24.4.1:46768.service.
May 13 08:23:49.067519 systemd[1]: sshd@7-172.24.4.25:22-172.24.4.1:46756.service: Deactivated successfully.
May 13 08:23:49.063000 audit[1436]: USER_END pid=1436 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:23:49.087465 kernel: audit: type=1106 audit(1747124629.063:162): pid=1436 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:23:49.086357 systemd-logind[1240]: Session 8 logged out. Waiting for processes to exit.
May 13 08:23:49.086740 systemd[1]: session-8.scope: Deactivated successfully.
May 13 08:23:49.063000 audit[1436]: CRED_DISP pid=1436 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:23:49.097412 systemd-logind[1240]: Removed session 8.
May 13 08:23:49.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.25:22-172.24.4.1:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 13 08:23:49.122465 kernel: audit: type=1104 audit(1747124629.063:163): pid=1436 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:23:49.122566 kernel: audit: type=1130 audit(1747124629.064:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.25:22-172.24.4.1:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:49.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.25:22-172.24.4.1:46756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:50.344000 audit[1469]: USER_ACCT pid=1469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:23:50.346237 sshd[1469]: Accepted publickey for core from 172.24.4.1 port 46768 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:50.346000 audit[1469]: CRED_ACQ pid=1469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:23:50.346000 audit[1469]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda473e9b0 a2=3 a3=0 items=0 ppid=1 pid=1469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:50.346000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:23:50.349501 
sshd[1469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:50.360084 systemd-logind[1240]: New session 9 of user core. May 13 08:23:50.360870 systemd[1]: Started session-9.scope. May 13 08:23:50.372000 audit[1469]: USER_START pid=1469 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:23:50.375000 audit[1474]: CRED_ACQ pid=1474 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:23:50.708000 audit[1475]: USER_ACCT pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:23:50.709193 sudo[1475]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 08:23:50.708000 audit[1475]: CRED_REFR pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:23:50.709773 sudo[1475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 08:23:50.713000 audit[1475]: USER_START pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:23:50.779330 systemd[1]: Starting docker.service... 
May 13 08:23:50.829168 env[1485]: time="2025-05-13T08:23:50.829093664Z" level=info msg="Starting up" May 13 08:23:50.832322 env[1485]: time="2025-05-13T08:23:50.832266107Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 08:23:50.832322 env[1485]: time="2025-05-13T08:23:50.832298391Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 08:23:50.832322 env[1485]: time="2025-05-13T08:23:50.832320098Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 08:23:50.834685 env[1485]: time="2025-05-13T08:23:50.832332922Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 08:23:50.836608 env[1485]: time="2025-05-13T08:23:50.836572648Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 08:23:50.836683 env[1485]: time="2025-05-13T08:23:50.836668910Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 08:23:50.836750 env[1485]: time="2025-05-13T08:23:50.836733981Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 08:23:50.836815 env[1485]: time="2025-05-13T08:23:50.836802761Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 08:23:50.852343 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2120936164-merged.mount: Deactivated successfully. May 13 08:23:51.088330 env[1485]: time="2025-05-13T08:23:51.086813324Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 08:23:51.088865 env[1485]: time="2025-05-13T08:23:51.088712609Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 08:23:51.089658 env[1485]: time="2025-05-13T08:23:51.089564960Z" level=info msg="Loading containers: start." 
May 13 08:23:51.218000 audit[1516]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.218000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc783a05e0 a2=0 a3=7ffc783a05cc items=0 ppid=1485 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.218000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 13 08:23:51.221000 audit[1518]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.221000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdf5389690 a2=0 a3=7ffdf538967c items=0 ppid=1485 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.221000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 13 08:23:51.223000 audit[1520]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.223000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd06e926a0 a2=0 a3=7ffd06e9268c items=0 ppid=1485 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.223000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 13 08:23:51.226000 audit[1522]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.226000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde41e5960 a2=0 a3=7ffde41e594c items=0 ppid=1485 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 13 08:23:51.229000 audit[1524]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.229000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffbe818c00 a2=0 a3=7fffbe818bec items=0 ppid=1485 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 13 08:23:51.259000 audit[1529]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.259000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff9f58bcc0 a2=0 a3=7fff9f58bcac items=0 ppid=1485 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.259000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 13 08:23:51.279000 audit[1531]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.279000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd13224400 a2=0 a3=7ffd132243ec items=0 ppid=1485 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.279000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 13 08:23:51.284000 audit[1533]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.284000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc396d5ad0 a2=0 a3=7ffc396d5abc items=0 ppid=1485 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.284000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 13 08:23:51.288000 audit[1535]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.288000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd88ecef00 a2=0 a3=7ffd88eceeec items=0 ppid=1485 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.288000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 08:23:51.306000 audit[1539]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.306000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc39fe4430 a2=0 a3=7ffc39fe441c items=0 ppid=1485 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.306000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 13 08:23:51.312000 audit[1540]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.312000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd7157f510 a2=0 a3=7ffd7157f4fc items=0 ppid=1485 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 08:23:51.340637 kernel: Initializing XFRM netlink socket May 13 08:23:51.433313 env[1485]: time="2025-05-13T08:23:51.433248393Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 13 08:23:51.478000 audit[1548]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.478000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffce8030ba0 a2=0 a3=7ffce8030b8c items=0 ppid=1485 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.478000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 13 08:23:51.500000 audit[1551]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.500000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffdfe28120 a2=0 a3=7fffdfe2810c items=0 ppid=1485 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.500000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 13 08:23:51.506000 audit[1554]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.506000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdfecb1950 a2=0 a3=7ffdfecb193c items=0 ppid=1485 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.506000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 13 08:23:51.508000 audit[1556]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.508000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdedad5ce0 a2=0 a3=7ffdedad5ccc items=0 ppid=1485 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.508000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 13 08:23:51.510000 audit[1558]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.510000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fffec554290 a2=0 a3=7fffec55427c items=0 ppid=1485 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.510000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 13 08:23:51.512000 audit[1560]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.512000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff6fde4f30 a2=0 a3=7fff6fde4f1c items=0 ppid=1485 
pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.512000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 13 08:23:51.514000 audit[1562]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.514000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc32d06360 a2=0 a3=7ffc32d0634c items=0 ppid=1485 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.514000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 13 08:23:51.523000 audit[1565]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.523000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc86097460 a2=0 a3=7ffc8609744c items=0 ppid=1485 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 13 08:23:51.526000 audit[1567]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.526000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffea08b1b00 a2=0 a3=7ffea08b1aec items=0 ppid=1485 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.526000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 13 08:23:51.528000 audit[1569]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.528000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe40575220 a2=0 a3=7ffe4057520c items=0 ppid=1485 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.528000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 13 08:23:51.530000 audit[1571]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.530000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff9dcdae00 a2=0 a3=7fff9dcdadec items=0 ppid=1485 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.530000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 13 08:23:51.533101 systemd-networkd[1030]: docker0: Link UP May 13 08:23:51.544000 audit[1575]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.544000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd40ec75c0 a2=0 a3=7ffd40ec75ac items=0 ppid=1485 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.544000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 13 08:23:51.548000 audit[1576]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:23:51.548000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff4a2f30b0 a2=0 a3=7fff4a2f309c items=0 ppid=1485 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:51.548000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 13 08:23:51.550882 env[1485]: time="2025-05-13T08:23:51.550859046Z" level=info msg="Loading containers: done." 
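The `PROCTITLE proctitle=…` values scattered through the audit records above are hex-encoded command lines with NUL bytes separating the argv elements. A minimal decoder (added for illustration only, not part of the log) makes them readable:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
# (Illustrative helper, not part of the captured log.)
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")

# Example taken from an entry above:
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
# -> /sbin/auditctl -D
```

Running it over the docker-related entries shows the iptables invocations the daemon issued, e.g. `/usr/sbin/iptables --wait -t nat -N DOCKER`.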
May 13 08:23:51.582854 env[1485]: time="2025-05-13T08:23:51.582807514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 08:23:51.583232 env[1485]: time="2025-05-13T08:23:51.583216056Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 08:23:51.583421 env[1485]: time="2025-05-13T08:23:51.583407138Z" level=info msg="Daemon has completed initialization" May 13 08:23:51.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:51.611758 systemd[1]: Started docker.service. May 13 08:23:51.621242 env[1485]: time="2025-05-13T08:23:51.621170787Z" level=info msg="API listen on /run/docker.sock" May 13 08:23:53.566283 env[1260]: time="2025-05-13T08:23:53.566202901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 08:23:54.327518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450667303.mount: Deactivated successfully. May 13 08:23:55.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:55.122206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 08:23:55.122410 systemd[1]: Stopped kubelet.service. May 13 08:23:55.123842 systemd[1]: Starting kubelet.service... May 13 08:23:55.127401 kernel: kauditd_printk_skb: 84 callbacks suppressed May 13 08:23:55.127469 kernel: audit: type=1130 audit(1747124635.120:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:55.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:55.142603 kernel: audit: type=1131 audit(1747124635.120:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:55.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:55.209494 systemd[1]: Started kubelet.service. May 13 08:23:55.216651 kernel: audit: type=1130 audit(1747124635.208:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:55.538947 kubelet[1622]: E0513 08:23:55.538016 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:23:55.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 13 08:23:55.542555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:23:55.542936 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
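Each kernel audit record above carries an `audit(epoch.millis:serial)` header followed by `key=value` fields. A rough parser sketch (field handling is simplified; quoted values with spaces are not split correctly here) shows how to turn one record into a dict and recover the wall-clock time from the epoch:

```python
import re
from datetime import datetime, timezone

# Illustrative parser for kernel audit records like those in the log above.
AUDIT_HDR = re.compile(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)")

def parse_audit(line: str) -> dict:
    # Naive key=value split; assumes values contain no spaces.
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    m = AUDIT_HDR.search(line)
    if m:
        fields["serial"] = m.group("serial")
        fields["timestamp"] = datetime.fromtimestamp(
            float(m.group("epoch")), tz=timezone.utc
        ).isoformat()
    return fields

rec = parse_audit(
    "audit(1747124628.810:157): auid=4294967295 ses=4294967295 "
    "op=remove_rule res=1"
)
# rec["timestamp"] starts with "2025-05-13T08:23:48", matching the journal time.
```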
May 13 08:23:55.549708 kernel: audit: type=1131 audit(1747124635.542:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 13 08:23:57.593521 env[1260]: time="2025-05-13T08:23:57.592227637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:23:57.598213 env[1260]: time="2025-05-13T08:23:57.596796864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:23:57.600786 env[1260]: time="2025-05-13T08:23:57.600733455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:23:57.604601 env[1260]: time="2025-05-13T08:23:57.603910692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:23:57.606649 env[1260]: time="2025-05-13T08:23:57.605607909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 13 08:23:57.623416 env[1260]: time="2025-05-13T08:23:57.623355867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 08:24:00.615258 env[1260]: time="2025-05-13T08:24:00.615163316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:00.618277 env[1260]: time="2025-05-13T08:24:00.618222755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:00.622405 env[1260]: time="2025-05-13T08:24:00.622359379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:00.629311 env[1260]: time="2025-05-13T08:24:00.629245887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 13 08:24:00.629729 env[1260]: time="2025-05-13T08:24:00.629658206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:00.657670 env[1260]: time="2025-05-13T08:24:00.657550698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 08:24:02.717151 env[1260]: time="2025-05-13T08:24:02.717036045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.721147 env[1260]: time="2025-05-13T08:24:02.721094414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.727205 env[1260]: time="2025-05-13T08:24:02.727154734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.732024 env[1260]: time="2025-05-13T08:24:02.731933977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.734422 env[1260]: time="2025-05-13T08:24:02.734362109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 13 08:24:02.762122 env[1260]: time="2025-05-13T08:24:02.762058085Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 08:24:04.336227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478080739.mount: Deactivated successfully.
May 13 08:24:05.362282 env[1260]: time="2025-05-13T08:24:05.362194170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.365325 env[1260]: time="2025-05-13T08:24:05.365258166Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.368561 env[1260]: time="2025-05-13T08:24:05.368503688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.371286 env[1260]: time="2025-05-13T08:24:05.371231379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.372659 env[1260]: time="2025-05-13T08:24:05.372542825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 13 08:24:05.393004 env[1260]: time="2025-05-13T08:24:05.392941282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 08:24:05.622862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 13 08:24:05.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:05.640706 kernel: audit: type=1130 audit(1747124645.622:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:05.623451 systemd[1]: Stopped kubelet.service.
May 13 08:24:05.627824 systemd[1]: Starting kubelet.service...
May 13 08:24:05.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:05.659653 kernel: audit: type=1131 audit(1747124645.622:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:05.672797 update_engine[1242]: I0513 08:24:05.672701 1242 update_attempter.cc:509] Updating boot flags...
May 13 08:24:06.015283 systemd[1]: Started kubelet.service.
May 13 08:24:06.036663 kernel: audit: type=1130 audit(1747124646.014:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:06.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:06.155394 kubelet[1672]: E0513 08:24:06.155357 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:24:06.158551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:24:06.158861 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:24:06.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 13 08:24:06.165642 kernel: audit: type=1131 audit(1747124646.158:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 13 08:24:06.621814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902010236.mount: Deactivated successfully.
May 13 08:24:09.180773 env[1260]: time="2025-05-13T08:24:09.180615393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.184740 env[1260]: time="2025-05-13T08:24:09.184673576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.191109 env[1260]: time="2025-05-13T08:24:09.191010974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.197374 env[1260]: time="2025-05-13T08:24:09.197259622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.198440 env[1260]: time="2025-05-13T08:24:09.198329712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 08:24:09.220767 env[1260]: time="2025-05-13T08:24:09.220706141Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 08:24:09.900774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958375694.mount: Deactivated successfully.
May 13 08:24:09.918916 env[1260]: time="2025-05-13T08:24:09.918843910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.922247 env[1260]: time="2025-05-13T08:24:09.922189014Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.925648 env[1260]: time="2025-05-13T08:24:09.925544161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.928999 env[1260]: time="2025-05-13T08:24:09.928946838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:09.930331 env[1260]: time="2025-05-13T08:24:09.930273523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 13 08:24:09.955530 env[1260]: time="2025-05-13T08:24:09.955394107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 08:24:10.721648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170674309.mount: Deactivated successfully.
May 13 08:24:15.209688 env[1260]: time="2025-05-13T08:24:15.208733656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.213759 env[1260]: time="2025-05-13T08:24:15.213676946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.219844 env[1260]: time="2025-05-13T08:24:15.219777140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.227485 env[1260]: time="2025-05-13T08:24:15.227422872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 13 08:24:15.227768 env[1260]: time="2025-05-13T08:24:15.224994336Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:16.372980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 13 08:24:16.373453 systemd[1]: Stopped kubelet.service.
May 13 08:24:16.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.376943 systemd[1]: Starting kubelet.service...
May 13 08:24:16.384743 kernel: audit: type=1130 audit(1747124656.371:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.384801 kernel: audit: type=1131 audit(1747124656.371:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.510030 systemd[1]: Started kubelet.service.
May 13 08:24:16.518606 kernel: audit: type=1130 audit(1747124656.508:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:16.585681 kubelet[1766]: E0513 08:24:16.585647 1766 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:24:16.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 13 08:24:16.590850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:24:16.591010 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:24:16.596624 kernel: audit: type=1131 audit(1747124656.589:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 13 08:24:19.120934 systemd[1]: Stopped kubelet.service.
May 13 08:24:19.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.124395 systemd[1]: Starting kubelet.service...
May 13 08:24:19.128725 kernel: audit: type=1130 audit(1747124659.120:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.136676 kernel: audit: type=1131 audit(1747124659.121:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.154632 systemd[1]: Reloading.
May 13 08:24:19.244721 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2025-05-13T08:24:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 08:24:19.250658 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2025-05-13T08:24:19Z" level=info msg="torcx already run"
May 13 08:24:19.370723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 08:24:19.370935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 08:24:19.394881 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 08:24:19.494238 systemd[1]: Started kubelet.service.
May 13 08:24:19.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.501629 kernel: audit: type=1130 audit(1747124659.494:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.508525 systemd[1]: Stopping kubelet.service...
May 13 08:24:19.516669 kernel: audit: type=1131 audit(1747124659.509:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.509931 systemd[1]: kubelet.service: Deactivated successfully.
May 13 08:24:19.510145 systemd[1]: Stopped kubelet.service.
May 13 08:24:19.517561 systemd[1]: Starting kubelet.service...
May 13 08:24:19.612046 kernel: audit: type=1130 audit(1747124659.603:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:24:19.603993 systemd[1]: Started kubelet.service.
May 13 08:24:19.680441 kubelet[1873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 08:24:19.680441 kubelet[1873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 08:24:19.680441 kubelet[1873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 08:24:19.680441 kubelet[1873]: I0513 08:24:19.679928 1873 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 08:24:20.088731 kubelet[1873]: I0513 08:24:20.088083 1873 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 08:24:20.089131 kubelet[1873]: I0513 08:24:20.089106 1873 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 08:24:20.089752 kubelet[1873]: I0513 08:24:20.089722 1873 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 08:24:20.555494 kubelet[1873]: I0513 08:24:20.555361 1873 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 08:24:20.586982 kubelet[1873]: E0513 08:24:20.586936 1873 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:20.719330 kubelet[1873]: I0513 08:24:20.719266 1873 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 08:24:20.721193 kubelet[1873]: I0513 08:24:20.721132 1873 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 08:24:20.722191 kubelet[1873]: I0513 08:24:20.721427 1873 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-f896a7891b.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 08:24:20.731088 kubelet[1873]: I0513 08:24:20.731040 1873 topology_manager.go:138] "Creating topology manager with none policy"
May 13 08:24:20.731088 kubelet[1873]: I0513 08:24:20.731094 1873 container_manager_linux.go:301] "Creating device plugin manager"
May 13 08:24:20.731551 kubelet[1873]: I0513 08:24:20.731519 1873 state_mem.go:36] "Initialized new in-memory state store"
May 13 08:24:20.749705 kubelet[1873]: I0513 08:24:20.749666 1873 kubelet.go:400] "Attempting to sync node with API server"
May 13 08:24:20.749844 kubelet[1873]: I0513 08:24:20.749724 1873 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 08:24:20.749844 kubelet[1873]: I0513 08:24:20.749776 1873 kubelet.go:312] "Adding apiserver pod source"
May 13 08:24:20.749968 kubelet[1873]: I0513 08:24:20.749850 1873 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 08:24:20.958420 kubelet[1873]: W0513 08:24:20.958247 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:20.958420 kubelet[1873]: E0513 08:24:20.958411 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:20.960049 kubelet[1873]: W0513 08:24:20.959935 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-f896a7891b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:20.960049 kubelet[1873]: E0513 08:24:20.960074 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-f896a7891b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:20.960421 kubelet[1873]: I0513 08:24:20.960367 1873 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 08:24:21.033722 kubelet[1873]: I0513 08:24:21.033612 1873 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 08:24:21.033990 kubelet[1873]: W0513 08:24:21.033878 1873 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 08:24:21.036057 kubelet[1873]: I0513 08:24:21.035988 1873 server.go:1264] "Started kubelet"
May 13 08:24:21.099000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 13 08:24:21.116663 kernel: audit: type=1400 audit(1747124661.099:216): avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 13 08:24:21.116779 kubelet[1873]: I0513 08:24:21.101315 1873 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
May 13 08:24:21.116779 kubelet[1873]: I0513 08:24:21.101473 1873 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
May 13 08:24:21.116779 kubelet[1873]: I0513 08:24:21.101789 1873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 08:24:21.116779 kubelet[1873]: I0513 08:24:21.116182 1873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 08:24:21.119293 kubelet[1873]: I0513 08:24:21.119237 1873 server.go:455] "Adding debug handlers to kubelet server"
May 13 08:24:21.122423 kubelet[1873]: I0513 08:24:21.122303 1873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 08:24:21.122916 kubelet[1873]: I0513 08:24:21.122860 1873 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 08:24:21.099000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 13 08:24:21.099000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005a5860 a1=c000725cf8 a2=c0005a5620 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 13 08:24:21.099000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 13 08:24:21.099000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 13 08:24:21.099000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0000e4ea0 a1=c000725d10 a2=c0005a5920 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 13 08:24:21.108000 audit[1884]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 13 08:24:21.108000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcfa759110 a2=0 a3=7ffcfa7590fc items=0 ppid=1873 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 13 08:24:21.111000 audit[1885]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 13 08:24:21.111000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff057b7c60 a2=0 a3=7fff057b7c4c items=0 ppid=1873 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
May 13 08:24:21.127975 kubelet[1873]: I0513 08:24:21.127923 1873 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 08:24:21.130000 audit[1887]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 13 08:24:21.130000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd5b248b0 a2=0 a3=7fffd5b2489c items=0 ppid=1873 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 13 08:24:21.134882 kubelet[1873]: I0513 08:24:21.134812 1873 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 08:24:21.135182 kubelet[1873]: I0513 08:24:21.135123 1873 reconciler.go:26] "Reconciler: start to sync state"
May 13 08:24:21.135814 kubelet[1873]: E0513 08:24:21.135528 1873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.25:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-n-f896a7891b.novalocal.183f089f5f5ea100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-n-f896a7891b.novalocal,UID:ci-3510-3-7-n-f896a7891b.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-n-f896a7891b.novalocal,},FirstTimestamp:2025-05-13 08:24:21.035901184 +0000 UTC m=+1.420430258,LastTimestamp:2025-05-13 08:24:21.035901184 +0000 UTC m=+1.420430258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-n-f896a7891b.novalocal,}"
May 13 08:24:21.136570 kubelet[1873]: E0513 08:24:21.136503 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-f896a7891b.novalocal?timeout=10s\": dial tcp 172.24.4.25:6443: connect: connection refused" interval="200ms"
May 13 08:24:21.137453 kubelet[1873]: W0513 08:24:21.137323 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:21.137631 kubelet[1873]: E0513 08:24:21.137472 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused
May 13 08:24:21.139000 audit[1889]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 13 08:24:21.139000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef07adcf0 a2=0 a3=7ffef07adcdc items=0 ppid=1873 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:24:21.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 13 08:24:21.141820 kubelet[1873]: I0513 08:24:21.141778 1873 factory.go:221] Registration of the systemd container factory successfully
May 13 08:24:21.142087 kubelet[1873]: I0513 08:24:21.142034 1873 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 08:24:21.144630 kubelet[1873]: I0513 08:24:21.144560 1873
factory.go:221] Registration of the containerd container factory successfully May 13 08:24:21.171000 audit[1893]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:21.171000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc89616820 a2=0 a3=7ffc8961680c items=0 ppid=1873 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.171000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 13 08:24:21.173462 kubelet[1873]: I0513 08:24:21.173425 1873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 08:24:21.172000 audit[1894]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:21.172000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2c0bfa30 a2=0 a3=7ffc2c0bfa1c items=0 ppid=1873 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 13 08:24:21.174669 kubelet[1873]: I0513 08:24:21.174656 1873 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 08:24:21.174800 kubelet[1873]: I0513 08:24:21.174788 1873 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 08:24:21.174891 kubelet[1873]: I0513 08:24:21.174880 1873 kubelet.go:2337] "Starting kubelet main sync loop" May 13 08:24:21.175057 kubelet[1873]: E0513 08:24:21.175039 1873 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 08:24:21.174000 audit[1895]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:21.174000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5bbd1960 a2=0 a3=7ffc5bbd194c items=0 ppid=1873 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 13 08:24:21.175000 audit[1896]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:21.175000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1916d490 a2=0 a3=7fff1916d47c items=0 ppid=1873 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 13 08:24:21.176000 audit[1897]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1897 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:21.176000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd748d190 a2=0 a3=7fffd748d17c items=0 ppid=1873 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 13 08:24:21.177000 audit[1898]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:21.177000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0c0119c0 a2=0 a3=7ffe0c0119ac items=0 ppid=1873 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 13 08:24:21.178000 audit[1899]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:21.178000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd3c8f2500 a2=0 a3=7ffd3c8f24ec items=0 ppid=1873 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.178000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 13 08:24:21.179000 audit[1900]: NETFILTER_CFG table=filter:37 
family=10 entries=2 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:21.179000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcf9338850 a2=0 a3=7ffcf933883c items=0 ppid=1873 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 13 08:24:21.182461 kubelet[1873]: W0513 08:24:21.182423 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:21.182568 kubelet[1873]: E0513 08:24:21.182557 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:21.192570 kubelet[1873]: I0513 08:24:21.192553 1873 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 08:24:21.192679 kubelet[1873]: I0513 08:24:21.192668 1873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 08:24:21.192784 kubelet[1873]: I0513 08:24:21.192774 1873 state_mem.go:36] "Initialized new in-memory state store" May 13 08:24:21.197439 kubelet[1873]: I0513 08:24:21.197422 1873 policy_none.go:49] "None policy: Start" May 13 08:24:21.198120 kubelet[1873]: I0513 08:24:21.198097 1873 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 08:24:21.198172 kubelet[1873]: I0513 08:24:21.198146 1873 state_mem.go:35] "Initializing new in-memory state store" May 
13 08:24:21.206514 kubelet[1873]: I0513 08:24:21.206477 1873 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 08:24:21.204000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:21.204000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:21.204000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5be60 a1=c000ef4ea0 a2=c000b5be30 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:21.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 08:24:21.206831 kubelet[1873]: I0513 08:24:21.206563 1873 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 13 08:24:21.206831 kubelet[1873]: I0513 08:24:21.206741 1873 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 08:24:21.210402 kubelet[1873]: I0513 08:24:21.210286 1873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 08:24:21.213457 kubelet[1873]: E0513 08:24:21.213425 1873 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:21.229596 kubelet[1873]: I0513 08:24:21.229549 1873 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.229966 kubelet[1873]: E0513 08:24:21.229834 1873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.25:6443/api/v1/nodes\": dial tcp 172.24.4.25:6443: connect: connection refused" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.275809 kubelet[1873]: I0513 08:24:21.275752 1873 topology_manager.go:215] "Topology Admit Handler" podUID="91606a82f44dda6eecf294865a36aca8" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.278538 kubelet[1873]: I0513 08:24:21.278492 1873 topology_manager.go:215] "Topology Admit Handler" podUID="daa9a0cbd46f6464cf77bbe76398ab3b" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.281663 kubelet[1873]: I0513 08:24:21.281559 1873 topology_manager.go:215] "Topology Admit Handler" podUID="5f9a3d797445c4197125a3d1644b119b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.336868 kubelet[1873]: I0513 08:24:21.336810 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91606a82f44dda6eecf294865a36aca8-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"91606a82f44dda6eecf294865a36aca8\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.337256 kubelet[1873]: I0513 08:24:21.337193 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.337711 kubelet[1873]: E0513 08:24:21.337310 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-f896a7891b.novalocal?timeout=10s\": dial tcp 172.24.4.25:6443: connect: connection refused" interval="400ms" May 13 08:24:21.337898 kubelet[1873]: I0513 08:24:21.337663 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.338173 kubelet[1873]: I0513 08:24:21.338137 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 
08:24:21.338451 kubelet[1873]: I0513 08:24:21.338409 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.338744 kubelet[1873]: I0513 08:24:21.338689 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.339007 kubelet[1873]: I0513 08:24:21.338972 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.339245 kubelet[1873]: I0513 08:24:21.339189 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.339500 kubelet[1873]: I0513 08:24:21.339465 1873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.433648 kubelet[1873]: I0513 08:24:21.433565 1873 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.434234 kubelet[1873]: E0513 08:24:21.434157 1873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.25:6443/api/v1/nodes\": dial tcp 172.24.4.25:6443: connect: connection refused" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:21.593003 env[1260]: time="2025-05-13T08:24:21.592283321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal,Uid:91606a82f44dda6eecf294865a36aca8,Namespace:kube-system,Attempt:0,}" May 13 08:24:21.597092 env[1260]: time="2025-05-13T08:24:21.596134693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal,Uid:5f9a3d797445c4197125a3d1644b119b,Namespace:kube-system,Attempt:0,}" May 13 08:24:21.597567 env[1260]: time="2025-05-13T08:24:21.597493028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal,Uid:daa9a0cbd46f6464cf77bbe76398ab3b,Namespace:kube-system,Attempt:0,}" May 13 08:24:21.739209 kubelet[1873]: E0513 08:24:21.739107 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-f896a7891b.novalocal?timeout=10s\": dial tcp 172.24.4.25:6443: connect: connection refused" interval="800ms" May 13 08:24:21.837679 kubelet[1873]: I0513 08:24:21.837171 1873 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 
08:24:21.837973 kubelet[1873]: E0513 08:24:21.837921 1873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.25:6443/api/v1/nodes\": dial tcp 172.24.4.25:6443: connect: connection refused" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:22.194546 kubelet[1873]: W0513 08:24:22.194374 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.195089 kubelet[1873]: E0513 08:24:22.195031 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.233028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424562008.mount: Deactivated successfully. 
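The `audit: PROCTITLE` records above carry the process command line as a hex-encoded, NUL-separated argv. A minimal sketch for decoding them (the `decode_proctitle` helper is ours, not part of any audit tooling; the sample payload is taken verbatim from the `table=mangle:26` entry above):

```python
def decode_proctitle(hex_payload: str) -> str:
    # PROCTITLE payloads are hex-encoded argv arrays with NUL separators.
    raw = bytes.fromhex(hex_payload)
    return " ".join(part.decode("utf-8", "replace")
                    for part in raw.split(b"\x00") if part)

# Payload copied from the audit record logged at 08:24:21.108000.
payload = ("69707461626C6573002D770035002D5700313030303030"
           "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
print(decode_proctitle(payload))
# → iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```

Note that the kernel truncates long PROCTITLE payloads, which is why the kubelet's own record above ends mid-flag (`--confi`); the decoder simply reproduces the truncation.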
May 13 08:24:22.256122 env[1260]: time="2025-05-13T08:24:22.256041628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.262643 env[1260]: time="2025-05-13T08:24:22.262531727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.271504 env[1260]: time="2025-05-13T08:24:22.271362061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.274849 env[1260]: time="2025-05-13T08:24:22.274755586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.284307 env[1260]: time="2025-05-13T08:24:22.284251657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.290257 env[1260]: time="2025-05-13T08:24:22.290203677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.292233 env[1260]: time="2025-05-13T08:24:22.292179955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.295499 env[1260]: time="2025-05-13T08:24:22.295450070Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.296696 env[1260]: time="2025-05-13T08:24:22.296645158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.297805 env[1260]: time="2025-05-13T08:24:22.297763280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.298787 env[1260]: time="2025-05-13T08:24:22.298736830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.304476 env[1260]: time="2025-05-13T08:24:22.304419826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:22.340094 env[1260]: time="2025-05-13T08:24:22.339948960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:22.340094 env[1260]: time="2025-05-13T08:24:22.340004653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:22.340456 env[1260]: time="2025-05-13T08:24:22.340020205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:22.340456 env[1260]: time="2025-05-13T08:24:22.340194337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7efeb2ca6820778a1f69ba5294600eb26ae9f633c656927824d46e8ee2cfe9e5 pid=1909 runtime=io.containerd.runc.v2 May 13 08:24:22.373784 env[1260]: time="2025-05-13T08:24:22.373233765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:22.373784 env[1260]: time="2025-05-13T08:24:22.373317434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:22.373784 env[1260]: time="2025-05-13T08:24:22.373350501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:22.373784 env[1260]: time="2025-05-13T08:24:22.373646590Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4208914a45a6866a16a02e3fc6e6587ef42f76d96e90e4e8f36df23100b4678c pid=1942 runtime=io.containerd.runc.v2 May 13 08:24:22.376319 env[1260]: time="2025-05-13T08:24:22.376258053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:22.376385 env[1260]: time="2025-05-13T08:24:22.376343515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:22.376413 env[1260]: time="2025-05-13T08:24:22.376373056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:22.376665 env[1260]: time="2025-05-13T08:24:22.376628483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa7381c2be3304edbf17a74cd7ee652bc318f66601d9efe3b181d84db5d3fb37 pid=1931 runtime=io.containerd.runc.v2 May 13 08:24:22.394288 kubelet[1873]: W0513 08:24:22.391487 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.394288 kubelet[1873]: E0513 08:24:22.391612 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.422087 kubelet[1873]: W0513 08:24:22.422001 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-f896a7891b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.422087 kubelet[1873]: E0513 08:24:22.422063 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-f896a7891b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.447301 env[1260]: time="2025-05-13T08:24:22.446346731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal,Uid:daa9a0cbd46f6464cf77bbe76398ab3b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7efeb2ca6820778a1f69ba5294600eb26ae9f633c656927824d46e8ee2cfe9e5\"" May 13 08:24:22.452384 env[1260]: time="2025-05-13T08:24:22.452350276Z" level=info msg="CreateContainer within sandbox \"7efeb2ca6820778a1f69ba5294600eb26ae9f633c656927824d46e8ee2cfe9e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 08:24:22.480692 env[1260]: time="2025-05-13T08:24:22.480383967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal,Uid:91606a82f44dda6eecf294865a36aca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4208914a45a6866a16a02e3fc6e6587ef42f76d96e90e4e8f36df23100b4678c\"" May 13 08:24:22.489157 env[1260]: time="2025-05-13T08:24:22.489116342Z" level=info msg="CreateContainer within sandbox \"4208914a45a6866a16a02e3fc6e6587ef42f76d96e90e4e8f36df23100b4678c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 08:24:22.495085 env[1260]: time="2025-05-13T08:24:22.495050287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal,Uid:5f9a3d797445c4197125a3d1644b119b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa7381c2be3304edbf17a74cd7ee652bc318f66601d9efe3b181d84db5d3fb37\"" May 13 08:24:22.496099 env[1260]: time="2025-05-13T08:24:22.496073336Z" level=info msg="CreateContainer within sandbox \"7efeb2ca6820778a1f69ba5294600eb26ae9f633c656927824d46e8ee2cfe9e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa2b96ab5266914c4da1e4c84044465d264c493fe4f767c06755d3c573d11c31\"" May 13 08:24:22.496808 env[1260]: time="2025-05-13T08:24:22.496783463Z" level=info msg="StartContainer for \"aa2b96ab5266914c4da1e4c84044465d264c493fe4f767c06755d3c573d11c31\"" May 13 08:24:22.501924 env[1260]: time="2025-05-13T08:24:22.501889171Z" level=info msg="CreateContainer within sandbox \"fa7381c2be3304edbf17a74cd7ee652bc318f66601d9efe3b181d84db5d3fb37\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 08:24:22.540145 kubelet[1873]: E0513 08:24:22.540097 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-f896a7891b.novalocal?timeout=10s\": dial tcp 172.24.4.25:6443: connect: connection refused" interval="1.6s" May 13 08:24:22.540376 env[1260]: time="2025-05-13T08:24:22.540122406Z" level=info msg="CreateContainer within sandbox \"4208914a45a6866a16a02e3fc6e6587ef42f76d96e90e4e8f36df23100b4678c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a7cfdce9d1c3aa500c869de9f62c5e00f16687c5802ba0b3961d76cff8eadae\"" May 13 08:24:22.541119 env[1260]: time="2025-05-13T08:24:22.541099213Z" level=info msg="StartContainer for \"3a7cfdce9d1c3aa500c869de9f62c5e00f16687c5802ba0b3961d76cff8eadae\"" May 13 08:24:22.549297 env[1260]: time="2025-05-13T08:24:22.549242096Z" level=info msg="CreateContainer within sandbox \"fa7381c2be3304edbf17a74cd7ee652bc318f66601d9efe3b181d84db5d3fb37\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f1fb6a03e5b7db1add54457a8086a659f88ea86e249dceb9f6988bfe6c6c99a0\"" May 13 08:24:22.550158 env[1260]: time="2025-05-13T08:24:22.550134441Z" level=info msg="StartContainer for \"f1fb6a03e5b7db1add54457a8086a659f88ea86e249dceb9f6988bfe6c6c99a0\"" May 13 08:24:22.606989 kubelet[1873]: E0513 08:24:22.606856 1873 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.619887 env[1260]: time="2025-05-13T08:24:22.619842979Z" level=info msg="StartContainer for 
\"aa2b96ab5266914c4da1e4c84044465d264c493fe4f767c06755d3c573d11c31\" returns successfully" May 13 08:24:22.637663 kubelet[1873]: W0513 08:24:22.637604 1873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.637663 kubelet[1873]: E0513 08:24:22.637668 1873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.25:6443: connect: connection refused May 13 08:24:22.640083 kubelet[1873]: I0513 08:24:22.640054 1873 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:22.640391 kubelet[1873]: E0513 08:24:22.640365 1873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.25:6443/api/v1/nodes\": dial tcp 172.24.4.25:6443: connect: connection refused" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:22.670795 env[1260]: time="2025-05-13T08:24:22.670744291Z" level=info msg="StartContainer for \"f1fb6a03e5b7db1add54457a8086a659f88ea86e249dceb9f6988bfe6c6c99a0\" returns successfully" May 13 08:24:22.682817 env[1260]: time="2025-05-13T08:24:22.682753586Z" level=info msg="StartContainer for \"3a7cfdce9d1c3aa500c869de9f62c5e00f16687c5802ba0b3961d76cff8eadae\" returns successfully" May 13 08:24:24.242030 kubelet[1873]: I0513 08:24:24.242003 1873 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:24.391602 kubelet[1873]: E0513 08:24:24.391510 1873 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" node="ci-3510-3-7-n-f896a7891b.novalocal" May 
13 08:24:24.456470 kubelet[1873]: E0513 08:24:24.456339 1873 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-7-n-f896a7891b.novalocal.183f089f5f5ea100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-n-f896a7891b.novalocal,UID:ci-3510-3-7-n-f896a7891b.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-n-f896a7891b.novalocal,},FirstTimestamp:2025-05-13 08:24:21.035901184 +0000 UTC m=+1.420430258,LastTimestamp:2025-05-13 08:24:21.035901184 +0000 UTC m=+1.420430258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-n-f896a7891b.novalocal,}" May 13 08:24:24.514649 kubelet[1873]: I0513 08:24:24.514545 1873 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:24.540970 kubelet[1873]: E0513 08:24:24.540938 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:24.641427 kubelet[1873]: E0513 08:24:24.641362 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:24.742324 kubelet[1873]: E0513 08:24:24.742298 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:24.843077 kubelet[1873]: E0513 08:24:24.842886 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:24.944606 kubelet[1873]: E0513 08:24:24.944505 1873 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.045177 kubelet[1873]: E0513 08:24:25.045141 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.145536 kubelet[1873]: E0513 08:24:25.145496 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.246200 kubelet[1873]: E0513 08:24:25.246157 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.347556 kubelet[1873]: E0513 08:24:25.347515 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.448818 kubelet[1873]: E0513 08:24:25.448661 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.550144 kubelet[1873]: E0513 08:24:25.550089 1873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-f896a7891b.novalocal\" not found" May 13 08:24:25.890078 kubelet[1873]: I0513 08:24:25.890007 1873 apiserver.go:52] "Watching apiserver" May 13 08:24:25.935810 kubelet[1873]: I0513 08:24:25.935762 1873 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 08:24:27.855254 systemd[1]: Reloading. 
May 13 08:24:27.991472 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-05-13T08:24:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 08:24:27.991539 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-05-13T08:24:27Z" level=info msg="torcx already run" May 13 08:24:28.087623 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:24:28.087639 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:24:28.110243 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:24:28.219591 systemd[1]: Stopping kubelet.service... May 13 08:24:28.235082 systemd[1]: kubelet.service: Deactivated successfully. May 13 08:24:28.235313 systemd[1]: Stopped kubelet.service. May 13 08:24:28.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:28.236904 kernel: kauditd_printk_skb: 47 callbacks suppressed May 13 08:24:28.237002 kernel: audit: type=1131 audit(1747124668.233:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:28.244418 systemd[1]: Starting kubelet.service... 
May 13 08:24:28.484295 kernel: audit: type=1130 audit(1747124668.468:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:28.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:28.470298 systemd[1]: Started kubelet.service. May 13 08:24:28.596604 kubelet[2219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 08:24:28.596954 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 08:24:28.597010 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 08:24:28.597142 kubelet[2219]: I0513 08:24:28.597114 2219 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 08:24:28.605805 kubelet[2219]: I0513 08:24:28.605747 2219 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 08:24:28.605805 kubelet[2219]: I0513 08:24:28.605803 2219 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 08:24:28.606389 kubelet[2219]: I0513 08:24:28.606337 2219 server.go:927] "Client rotation is on, will bootstrap in background" May 13 08:24:28.614238 kubelet[2219]: I0513 08:24:28.614194 2219 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 08:24:28.620973 kubelet[2219]: I0513 08:24:28.619666 2219 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 08:24:28.645172 kubelet[2219]: I0513 08:24:28.645104 2219 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 08:24:28.646561 kubelet[2219]: I0513 08:24:28.646503 2219 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 08:24:28.647207 kubelet[2219]: I0513 08:24:28.646568 2219 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-f896a7891b.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 08:24:28.647322 kubelet[2219]: I0513 08:24:28.647296 2219 topology_manager.go:138] "Creating topology manager with none 
policy" May 13 08:24:28.647365 kubelet[2219]: I0513 08:24:28.647339 2219 container_manager_linux.go:301] "Creating device plugin manager" May 13 08:24:28.647481 kubelet[2219]: I0513 08:24:28.647450 2219 state_mem.go:36] "Initialized new in-memory state store" May 13 08:24:28.647766 kubelet[2219]: I0513 08:24:28.647743 2219 kubelet.go:400] "Attempting to sync node with API server" May 13 08:24:28.647817 kubelet[2219]: I0513 08:24:28.647784 2219 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 08:24:28.647851 kubelet[2219]: I0513 08:24:28.647830 2219 kubelet.go:312] "Adding apiserver pod source" May 13 08:24:28.647895 kubelet[2219]: I0513 08:24:28.647874 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 08:24:28.648844 kubelet[2219]: I0513 08:24:28.648826 2219 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 08:24:28.649103 kubelet[2219]: I0513 08:24:28.649089 2219 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 08:24:28.649726 kubelet[2219]: I0513 08:24:28.649705 2219 server.go:1264] "Started kubelet" May 13 08:24:28.659258 kubelet[2219]: I0513 08:24:28.659028 2219 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 08:24:28.660246 kubelet[2219]: I0513 08:24:28.660196 2219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 08:24:28.660348 kubelet[2219]: I0513 08:24:28.660324 2219 server.go:455] "Adding debug handlers to kubelet server" May 13 08:24:28.660660 kubelet[2219]: I0513 08:24:28.660646 2219 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 08:24:28.673174 kernel: audit: type=1400 audit(1747124668.661:233): avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:28.673284 kernel: audit: type=1401 audit(1747124668.661:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:28.661000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:28.661000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.662976 2219 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.663048 2219 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.663091 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.670082 2219 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.670180 2219 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 08:24:28.673391 kubelet[2219]: I0513 08:24:28.670356 2219 reconciler.go:26] "Reconciler: start to sync state" May 13 08:24:28.675188 kubelet[2219]: I0513 08:24:28.673725 2219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 08:24:28.675544 kubelet[2219]: I0513 08:24:28.675513 2219 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 08:24:28.675611 kubelet[2219]: I0513 08:24:28.675551 2219 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 08:24:28.675611 kubelet[2219]: I0513 08:24:28.675569 2219 kubelet.go:2337] "Starting kubelet main sync loop" May 13 08:24:28.675696 kubelet[2219]: E0513 08:24:28.675672 2219 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 08:24:28.684730 kernel: audit: type=1300 audit(1747124668.661:233): arch=c000003e syscall=188 success=no exit=-22 a0=c0009f7b00 a1=c00090b038 a2=c0009f7ad0 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:28.684793 kernel: audit: type=1327 audit(1747124668.661:233): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 08:24:28.661000 audit[2219]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009f7b00 a1=c00090b038 a2=c0009f7ad0 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:28.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 08:24:28.684926 kubelet[2219]: I0513 08:24:28.681612 2219 factory.go:221] Registration of the systemd container factory 
successfully May 13 08:24:28.684926 kubelet[2219]: I0513 08:24:28.681674 2219 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 08:24:28.686336 kubelet[2219]: I0513 08:24:28.686323 2219 factory.go:221] Registration of the containerd container factory successfully May 13 08:24:28.661000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:28.692700 kubelet[2219]: E0513 08:24:28.692682 2219 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 08:24:28.697603 kernel: audit: type=1400 audit(1747124668.661:234): avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:28.661000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:28.702601 kernel: audit: type=1401 audit(1747124668.661:234): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:28.661000 audit[2219]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000baed20 a1=c00090b050 a2=c0009f7b90 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:28.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 
May 13 08:24:28.716544 kernel: audit: type=1300 audit(1747124668.661:234): arch=c000003e syscall=188 success=no exit=-22 a0=c000baed20 a1=c00090b050 a2=c0009f7b90 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:28.716638 kernel: audit: type=1327 audit(1747124668.661:234): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 08:24:28.750889 kubelet[2219]: I0513 08:24:28.750808 2219 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 08:24:28.751026 kubelet[2219]: I0513 08:24:28.751014 2219 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 08:24:28.751366 kubelet[2219]: I0513 08:24:28.751332 2219 state_mem.go:36] "Initialized new in-memory state store" May 13 08:24:28.752146 kubelet[2219]: I0513 08:24:28.752132 2219 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 08:24:28.752413 kubelet[2219]: I0513 08:24:28.752386 2219 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 08:24:28.752656 kubelet[2219]: I0513 08:24:28.752561 2219 policy_none.go:49] "None policy: Start" May 13 08:24:28.754023 kubelet[2219]: I0513 08:24:28.754001 2219 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 08:24:28.754095 kubelet[2219]: I0513 08:24:28.754028 2219 state_mem.go:35] "Initializing new in-memory state store" May 13 08:24:28.754201 kubelet[2219]: I0513 08:24:28.754173 2219 state_mem.go:75] "Updated machine memory state" May 13 08:24:28.755405 kubelet[2219]: I0513 08:24:28.755380 2219 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 
08:24:28.753000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:24:28.753000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 13 08:24:28.753000 audit[2219]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fcd9e0 a1=c000fd65b8 a2=c000fcd9b0 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:28.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 13 08:24:28.757833 kubelet[2219]: I0513 08:24:28.757729 2219 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 13 08:24:28.757916 kubelet[2219]: I0513 08:24:28.757880 2219 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 08:24:28.759169 kubelet[2219]: I0513 08:24:28.759051 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 08:24:28.775925 kubelet[2219]: I0513 08:24:28.775883 2219 topology_manager.go:215] "Topology Admit Handler" podUID="daa9a0cbd46f6464cf77bbe76398ab3b" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.776231 kubelet[2219]: I0513 08:24:28.776208 2219 topology_manager.go:215] "Topology Admit Handler" podUID="5f9a3d797445c4197125a3d1644b119b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.776401 kubelet[2219]: I0513 08:24:28.776382 2219 topology_manager.go:215] "Topology Admit Handler" podUID="91606a82f44dda6eecf294865a36aca8" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.790371 kubelet[2219]: W0513 08:24:28.790142 2219 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:28.793816 kubelet[2219]: W0513 08:24:28.793788 2219 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:28.794354 kubelet[2219]: W0513 08:24:28.794342 2219 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:28.868746 kubelet[2219]: I0513 08:24:28.868495 2219 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.871135 kubelet[2219]: I0513 08:24:28.870988 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.871302 kubelet[2219]: I0513 08:24:28.871284 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.871429 kubelet[2219]: I0513 08:24:28.871411 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daa9a0cbd46f6464cf77bbe76398ab3b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"daa9a0cbd46f6464cf77bbe76398ab3b\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.885858 kubelet[2219]: I0513 08:24:28.885711 2219 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.885983 kubelet[2219]: I0513 08:24:28.885896 2219 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.973986 kubelet[2219]: I0513 08:24:28.972347 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.973986 kubelet[2219]: I0513 08:24:28.972609 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91606a82f44dda6eecf294865a36aca8-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"91606a82f44dda6eecf294865a36aca8\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.973986 kubelet[2219]: I0513 08:24:28.972789 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.973986 kubelet[2219]: I0513 08:24:28.972884 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:28.974364 kubelet[2219]: I0513 08:24:28.972976 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 
08:24:28.974364 kubelet[2219]: I0513 08:24:28.973120 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f9a3d797445c4197125a3d1644b119b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" (UID: \"5f9a3d797445c4197125a3d1644b119b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:29.659251 kubelet[2219]: I0513 08:24:29.659215 2219 apiserver.go:52] "Watching apiserver" May 13 08:24:29.671269 kubelet[2219]: I0513 08:24:29.671232 2219 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 08:24:29.741701 kubelet[2219]: W0513 08:24:29.739856 2219 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:29.741701 kubelet[2219]: E0513 08:24:29.740025 2219 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:29.741701 kubelet[2219]: W0513 08:24:29.740685 2219 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:29.741701 kubelet[2219]: E0513 08:24:29.740722 2219 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:24:29.765956 kubelet[2219]: I0513 08:24:29.765790 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-n-f896a7891b.novalocal" podStartSLOduration=1.765772239 podStartE2EDuration="1.765772239s" 
podCreationTimestamp="2025-05-13 08:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:29.765711207 +0000 UTC m=+1.274231706" watchObservedRunningTime="2025-05-13 08:24:29.765772239 +0000 UTC m=+1.274292728" May 13 08:24:29.766146 kubelet[2219]: I0513 08:24:29.766036 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-f896a7891b.novalocal" podStartSLOduration=1.766027298 podStartE2EDuration="1.766027298s" podCreationTimestamp="2025-05-13 08:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:29.750800945 +0000 UTC m=+1.259321425" watchObservedRunningTime="2025-05-13 08:24:29.766027298 +0000 UTC m=+1.274547777" May 13 08:24:29.787976 kubelet[2219]: I0513 08:24:29.786830 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-n-f896a7891b.novalocal" podStartSLOduration=1.786811439 podStartE2EDuration="1.786811439s" podCreationTimestamp="2025-05-13 08:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:29.775295362 +0000 UTC m=+1.283815851" watchObservedRunningTime="2025-05-13 08:24:29.786811439 +0000 UTC m=+1.295331918" May 13 08:24:34.817212 sudo[1475]: pam_unix(sudo:session): session closed for user root May 13 08:24:34.818000 audit[1475]: USER_END pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 13 08:24:34.820188 kernel: kauditd_printk_skb: 4 callbacks suppressed May 13 08:24:34.820284 kernel: audit: type=1106 audit(1747124674.818:236): pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:24:34.818000 audit[1475]: CRED_DISP pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:24:34.831724 kernel: audit: type=1104 audit(1747124674.818:237): pid=1475 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 13 08:24:35.064961 sshd[1469]: pam_unix(sshd:session): session closed for user core May 13 08:24:35.067000 audit[1469]: USER_END pid=1469 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:24:35.085971 kernel: audit: type=1106 audit(1747124675.067:238): pid=1469 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:24:35.085666 systemd[1]: sshd@8-172.24.4.25:22-172.24.4.1:46768.service: Deactivated successfully. 
May 13 08:24:35.068000 audit[1469]: CRED_DISP pid=1469 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:24:35.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.25:22-172.24.4.1:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:35.102202 systemd[1]: session-9.scope: Deactivated successfully. May 13 08:24:35.114916 kernel: audit: type=1104 audit(1747124675.068:239): pid=1469 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:24:35.115043 kernel: audit: type=1131 audit(1747124675.085:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.25:22-172.24.4.1:46768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:24:35.114992 systemd-logind[1240]: Session 9 logged out. Waiting for processes to exit. May 13 08:24:35.117839 systemd-logind[1240]: Removed session 9. May 13 08:24:42.998500 kubelet[2219]: I0513 08:24:42.998468 2219 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 08:24:42.998940 env[1260]: time="2025-05-13T08:24:42.998879770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 08:24:42.999157 kubelet[2219]: I0513 08:24:42.999067 2219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 08:24:43.662354 kubelet[2219]: I0513 08:24:43.662279 2219 topology_manager.go:215] "Topology Admit Handler" podUID="881d897c-3493-4bc0-aa48-e4c43cb9d7d5" podNamespace="kube-system" podName="kube-proxy-7d4zl" May 13 08:24:43.779571 kubelet[2219]: I0513 08:24:43.779435 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/881d897c-3493-4bc0-aa48-e4c43cb9d7d5-kube-proxy\") pod \"kube-proxy-7d4zl\" (UID: \"881d897c-3493-4bc0-aa48-e4c43cb9d7d5\") " pod="kube-system/kube-proxy-7d4zl" May 13 08:24:43.779571 kubelet[2219]: I0513 08:24:43.779493 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/881d897c-3493-4bc0-aa48-e4c43cb9d7d5-xtables-lock\") pod \"kube-proxy-7d4zl\" (UID: \"881d897c-3493-4bc0-aa48-e4c43cb9d7d5\") " pod="kube-system/kube-proxy-7d4zl" May 13 08:24:43.779571 kubelet[2219]: I0513 08:24:43.779526 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/881d897c-3493-4bc0-aa48-e4c43cb9d7d5-lib-modules\") pod \"kube-proxy-7d4zl\" (UID: \"881d897c-3493-4bc0-aa48-e4c43cb9d7d5\") " pod="kube-system/kube-proxy-7d4zl" May 13 08:24:43.779571 kubelet[2219]: I0513 08:24:43.779589 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdtcc\" (UniqueName: \"kubernetes.io/projected/881d897c-3493-4bc0-aa48-e4c43cb9d7d5-kube-api-access-mdtcc\") pod \"kube-proxy-7d4zl\" (UID: \"881d897c-3493-4bc0-aa48-e4c43cb9d7d5\") " pod="kube-system/kube-proxy-7d4zl" May 13 08:24:43.973440 env[1260]: time="2025-05-13T08:24:43.972320409Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-7d4zl,Uid:881d897c-3493-4bc0-aa48-e4c43cb9d7d5,Namespace:kube-system,Attempt:0,}" May 13 08:24:44.029921 env[1260]: time="2025-05-13T08:24:44.029387371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:44.029921 env[1260]: time="2025-05-13T08:24:44.029461406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:44.029921 env[1260]: time="2025-05-13T08:24:44.029486535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:44.029921 env[1260]: time="2025-05-13T08:24:44.029717919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4625811bff1efb575fced8fcc618f329e8fcc653e66f8dbc209bf1bf339a8f45 pid=2300 runtime=io.containerd.runc.v2 May 13 08:24:44.032932 kubelet[2219]: I0513 08:24:44.032894 2219 topology_manager.go:215] "Topology Admit Handler" podUID="91c94662-2b2b-4d14-afcb-7e56feb9ee65" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-9w2lk" May 13 08:24:44.083093 kubelet[2219]: I0513 08:24:44.083056 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91c94662-2b2b-4d14-afcb-7e56feb9ee65-var-lib-calico\") pod \"tigera-operator-797db67f8-9w2lk\" (UID: \"91c94662-2b2b-4d14-afcb-7e56feb9ee65\") " pod="tigera-operator/tigera-operator-797db67f8-9w2lk" May 13 08:24:44.083261 kubelet[2219]: I0513 08:24:44.083140 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j964p\" (UniqueName: \"kubernetes.io/projected/91c94662-2b2b-4d14-afcb-7e56feb9ee65-kube-api-access-j964p\") pod \"tigera-operator-797db67f8-9w2lk\" 
(UID: \"91c94662-2b2b-4d14-afcb-7e56feb9ee65\") " pod="tigera-operator/tigera-operator-797db67f8-9w2lk" May 13 08:24:44.100709 env[1260]: time="2025-05-13T08:24:44.100650017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7d4zl,Uid:881d897c-3493-4bc0-aa48-e4c43cb9d7d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4625811bff1efb575fced8fcc618f329e8fcc653e66f8dbc209bf1bf339a8f45\"" May 13 08:24:44.104454 env[1260]: time="2025-05-13T08:24:44.103767271Z" level=info msg="CreateContainer within sandbox \"4625811bff1efb575fced8fcc618f329e8fcc653e66f8dbc209bf1bf339a8f45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 08:24:44.126106 env[1260]: time="2025-05-13T08:24:44.126063089Z" level=info msg="CreateContainer within sandbox \"4625811bff1efb575fced8fcc618f329e8fcc653e66f8dbc209bf1bf339a8f45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec108a46daaa4e6fa2d6af23a8ffc72dfa78e91661ca9bb406fe93e404cee867\"" May 13 08:24:44.128070 env[1260]: time="2025-05-13T08:24:44.128039916Z" level=info msg="StartContainer for \"ec108a46daaa4e6fa2d6af23a8ffc72dfa78e91661ca9bb406fe93e404cee867\"" May 13 08:24:44.199274 env[1260]: time="2025-05-13T08:24:44.199224930Z" level=info msg="StartContainer for \"ec108a46daaa4e6fa2d6af23a8ffc72dfa78e91661ca9bb406fe93e404cee867\" returns successfully" May 13 08:24:44.271000 audit[2397]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.271000 audit[2396]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.280183 kernel: audit: type=1325 audit(1747124684.271:241): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.280359 kernel: audit: type=1325 audit(1747124684.271:242): table=mangle:39 
family=2 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.271000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf7fc1a40 a2=0 a3=7ffdf7fc1a2c items=0 ppid=2353 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 08:24:44.292174 kernel: audit: type=1300 audit(1747124684.271:242): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf7fc1a40 a2=0 a3=7ffdf7fc1a2c items=0 ppid=2353 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.292296 kernel: audit: type=1327 audit(1747124684.271:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 08:24:44.271000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed54f8c40 a2=0 a3=7ffed54f8c2c items=0 ppid=2353 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.299856 kernel: audit: type=1300 audit(1747124684.271:241): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed54f8c40 a2=0 a3=7ffed54f8c2c items=0 ppid=2353 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.271000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 08:24:44.303858 kernel: audit: type=1327 audit(1747124684.271:241): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 13 08:24:44.303950 kernel: audit: type=1325 audit(1747124684.275:243): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.275000 audit[2398]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.275000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdd3f1c50 a2=0 a3=7ffcdd3f1c3c items=0 ppid=2353 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.315064 kernel: audit: type=1300 audit(1747124684.275:243): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdd3f1c50 a2=0 a3=7ffcdd3f1c3c items=0 ppid=2353 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 08:24:44.318883 kernel: audit: type=1327 audit(1747124684.275:243): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 08:24:44.275000 audit[2399]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.326626 kernel: audit: type=1325 
audit(1747124684.275:244): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.275000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3a572ec0 a2=0 a3=7ffe3a572eac items=0 ppid=2353 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 13 08:24:44.287000 audit[2400]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.287000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff31fe12d0 a2=0 a3=7fff31fe12bc items=0 ppid=2353 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 13 08:24:44.291000 audit[2401]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.291000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0a55f450 a2=0 a3=7ffc0a55f43c items=0 ppid=2353 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.291000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 13 08:24:44.335743 env[1260]: time="2025-05-13T08:24:44.335701178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-9w2lk,Uid:91c94662-2b2b-4d14-afcb-7e56feb9ee65,Namespace:tigera-operator,Attempt:0,}" May 13 08:24:44.355028 env[1260]: time="2025-05-13T08:24:44.354960100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:44.355219 env[1260]: time="2025-05-13T08:24:44.355189931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:44.355332 env[1260]: time="2025-05-13T08:24:44.355305418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:44.355851 env[1260]: time="2025-05-13T08:24:44.355617871Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f21420dc56f1c30195220a948b6c0bbb78b6259fa3ce9f0d66ee74fba599f2d pid=2410 runtime=io.containerd.runc.v2 May 13 08:24:44.379000 audit[2435]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.379000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe0929f1f0 a2=0 a3=7ffe0929f1dc items=0 ppid=2353 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 13 08:24:44.383000 audit[2437]: NETFILTER_CFG 
table=filter:45 family=2 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.383000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffddb7d77f0 a2=0 a3=7ffddb7d77dc items=0 ppid=2353 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.383000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 13 08:24:44.392000 audit[2440]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.392000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5b5dc020 a2=0 a3=7ffd5b5dc00c items=0 ppid=2353 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 13 08:24:44.395000 audit[2441]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.395000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5a179170 a2=0 a3=7ffc5a17915c items=0 ppid=2353 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 13 08:24:44.402000 audit[2443]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.402000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2fd1af90 a2=0 a3=7ffe2fd1af7c items=0 ppid=2353 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 13 08:24:44.404000 audit[2445]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.404000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc71a09b10 a2=0 a3=7ffc71a09afc items=0 ppid=2353 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 13 08:24:44.410000 audit[2455]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.410000 audit[2455]: SYSCALL arch=c000003e syscall=46 
success=yes exit=744 a0=3 a1=7ffe5e115890 a2=0 a3=7ffe5e11587c items=0 ppid=2353 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 13 08:24:44.418284 env[1260]: time="2025-05-13T08:24:44.418248166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-9w2lk,Uid:91c94662-2b2b-4d14-afcb-7e56feb9ee65,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7f21420dc56f1c30195220a948b6c0bbb78b6259fa3ce9f0d66ee74fba599f2d\"" May 13 08:24:44.419000 audit[2458]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.419000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed5ea4aa0 a2=0 a3=7ffed5ea4a8c items=0 ppid=2353 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 13 08:24:44.421000 audit[2459]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.421923 env[1260]: time="2025-05-13T08:24:44.421896472Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 08:24:44.421000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff57d1efa0 a2=0 a3=7fff57d1ef8c items=0 ppid=2353 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 13 08:24:44.424000 audit[2461]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.424000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd96361940 a2=0 a3=7ffd9636192c items=0 ppid=2353 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 13 08:24:44.426000 audit[2462]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.426000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9dc65e80 a2=0 a3=7ffc9dc65e6c items=0 ppid=2353 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.426000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 13 08:24:44.429000 audit[2464]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.429000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3d1f68b0 a2=0 a3=7ffe3d1f689c items=0 ppid=2353 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 08:24:44.432000 audit[2467]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.432000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd41de9950 a2=0 a3=7ffd41de993c items=0 ppid=2353 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 08:24:44.437000 audit[2470]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.437000 audit[2470]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=748 a0=3 a1=7ffe9055d5b0 a2=0 a3=7ffe9055d59c items=0 ppid=2353 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 13 08:24:44.438000 audit[2471]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.438000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd51acc810 a2=0 a3=7ffd51acc7fc items=0 ppid=2353 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.438000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 13 08:24:44.440000 audit[2473]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.440000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffed0258a50 a2=0 a3=7ffed0258a3c items=0 ppid=2353 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.440000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 08:24:44.443000 audit[2476]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.443000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9b8adec0 a2=0 a3=7ffc9b8adeac items=0 ppid=2353 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 08:24:44.445000 audit[2477]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.445000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff99de5cc0 a2=0 a3=7fff99de5cac items=0 ppid=2353 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 13 08:24:44.447000 audit[2479]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 13 08:24:44.447000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffda3e91e00 a2=0 a3=7ffda3e91dec items=0 ppid=2353 pid=2479 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 13 08:24:44.476000 audit[2485]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:44.476000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcaa8c8510 a2=0 a3=7ffcaa8c84fc items=0 ppid=2353 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:44.485000 audit[2485]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:44.485000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcaa8c8510 a2=0 a3=7ffcaa8c84fc items=0 ppid=2353 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.485000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:44.487000 audit[2490]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2490 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.487000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc72a5a0f0 a2=0 a3=7ffc72a5a0dc items=0 ppid=2353 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 13 08:24:44.490000 audit[2492]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.490000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc6ce8fb40 a2=0 a3=7ffc6ce8fb2c items=0 ppid=2353 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 13 08:24:44.493000 audit[2495]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.493000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe78a82870 a2=0 a3=7ffe78a8285c items=0 ppid=2353 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.493000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 13 08:24:44.494000 audit[2496]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.494000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9161fbe0 a2=0 a3=7fff9161fbcc items=0 ppid=2353 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 13 08:24:44.497000 audit[2498]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.497000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2a3292e0 a2=0 a3=7ffd2a3292cc items=0 ppid=2353 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 13 08:24:44.498000 audit[2499]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.498000 audit[2499]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffe85c6d720 a2=0 a3=7ffe85c6d70c items=0 ppid=2353 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 13 08:24:44.500000 audit[2501]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.500000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc27621e70 a2=0 a3=7ffc27621e5c items=0 ppid=2353 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 13 08:24:44.504000 audit[2504]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.504000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd35598070 a2=0 a3=7ffd3559805c items=0 ppid=2353 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.504000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 13 08:24:44.505000 audit[2505]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.505000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea1e85630 a2=0 a3=7ffea1e8561c items=0 ppid=2353 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 13 08:24:44.507000 audit[2507]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.507000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8add55d0 a2=0 a3=7fff8add55bc items=0 ppid=2353 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 13 08:24:44.508000 audit[2508]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.508000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffff3c1a710 a2=0 a3=7ffff3c1a6fc items=0 ppid=2353 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 13 08:24:44.511000 audit[2510]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.511000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6ad26410 a2=0 a3=7ffd6ad263fc items=0 ppid=2353 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 13 08:24:44.515000 audit[2513]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.515000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0e07cde0 a2=0 a3=7ffc0e07cdcc items=0 ppid=2353 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.515000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 13 08:24:44.519000 audit[2516]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.519000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc39edfc0 a2=0 a3=7ffcc39edfac items=0 ppid=2353 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 13 08:24:44.520000 audit[2517]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.520000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd79648d50 a2=0 a3=7ffd79648d3c items=0 ppid=2353 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 13 08:24:44.522000 audit[2519]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.522000 audit[2519]: SYSCALL arch=c000003e syscall=46 
success=yes exit=600 a0=3 a1=7ffe6b1b4ca0 a2=0 a3=7ffe6b1b4c8c items=0 ppid=2353 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 08:24:44.526000 audit[2522]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.526000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdb1ff9800 a2=0 a3=7ffdb1ff97ec items=0 ppid=2353 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.526000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 13 08:24:44.530000 audit[2523]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.530000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b1d4980 a2=0 a3=7ffd4b1d496c items=0 ppid=2353 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.530000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 13 08:24:44.533000 audit[2525]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.533000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff123b41c0 a2=0 a3=7fff123b41ac items=0 ppid=2353 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 13 08:24:44.534000 audit[2526]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.534000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff85d08120 a2=0 a3=7fff85d0810c items=0 ppid=2353 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 13 08:24:44.537000 audit[2528]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.537000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8b982100 a2=0 a3=7ffc8b9820ec items=0 ppid=2353 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 08:24:44.540000 audit[2531]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 13 08:24:44.540000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc2773b90 a2=0 a3=7ffdc2773b7c items=0 ppid=2353 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 13 08:24:44.543000 audit[2533]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 13 08:24:44.543000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc2f12b6a0 a2=0 a3=7ffc2f12b68c items=0 ppid=2353 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.543000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:44.544000 audit[2533]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 13 08:24:44.544000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc2f12b6a0 a2=0 a3=7ffc2f12b68c items=0 
ppid=2353 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:44.544000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:44.903322 systemd[1]: run-containerd-runc-k8s.io-4625811bff1efb575fced8fcc618f329e8fcc653e66f8dbc209bf1bf339a8f45-runc.rmE1uA.mount: Deactivated successfully. May 13 08:24:47.016905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092127279.mount: Deactivated successfully. May 13 08:24:48.589151 env[1260]: time="2025-05-13T08:24:48.589033622Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:48.591869 env[1260]: time="2025-05-13T08:24:48.591837698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:48.593763 env[1260]: time="2025-05-13T08:24:48.593739169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:48.595502 env[1260]: time="2025-05-13T08:24:48.595471199Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:48.596153 env[1260]: time="2025-05-13T08:24:48.596128746Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 08:24:48.600125 
env[1260]: time="2025-05-13T08:24:48.600090689Z" level=info msg="CreateContainer within sandbox \"7f21420dc56f1c30195220a948b6c0bbb78b6259fa3ce9f0d66ee74fba599f2d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 08:24:48.614462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206192410.mount: Deactivated successfully. May 13 08:24:48.622325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089683919.mount: Deactivated successfully. May 13 08:24:48.639028 env[1260]: time="2025-05-13T08:24:48.638988784Z" level=info msg="CreateContainer within sandbox \"7f21420dc56f1c30195220a948b6c0bbb78b6259fa3ce9f0d66ee74fba599f2d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"75e4dfc934b52ec565e46f20932c1df5e0ff69ddb635c4d489067a0ac3a6deac\"" May 13 08:24:48.640805 env[1260]: time="2025-05-13T08:24:48.639611413Z" level=info msg="StartContainer for \"75e4dfc934b52ec565e46f20932c1df5e0ff69ddb635c4d489067a0ac3a6deac\"" May 13 08:24:48.705410 kubelet[2219]: I0513 08:24:48.702786 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7d4zl" podStartSLOduration=5.702754973 podStartE2EDuration="5.702754973s" podCreationTimestamp="2025-05-13 08:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:44.78652453 +0000 UTC m=+16.295045059" watchObservedRunningTime="2025-05-13 08:24:48.702754973 +0000 UTC m=+20.211275462" May 13 08:24:48.737984 env[1260]: time="2025-05-13T08:24:48.737858724Z" level=info msg="StartContainer for \"75e4dfc934b52ec565e46f20932c1df5e0ff69ddb635c4d489067a0ac3a6deac\" returns successfully" May 13 08:24:52.257625 kernel: kauditd_printk_skb: 143 callbacks suppressed May 13 08:24:52.257769 kernel: audit: type=1325 audit(1747124692.254:292): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" May 13 08:24:52.254000 audit[2574]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.254000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd41a027a0 a2=0 a3=7ffd41a0278c items=0 ppid=2353 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.269013 kernel: audit: type=1300 audit(1747124692.254:292): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd41a027a0 a2=0 a3=7ffd41a0278c items=0 ppid=2353 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.269070 kernel: audit: type=1327 audit(1747124692.254:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.271000 audit[2574]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.277307 kernel: audit: type=1325 audit(1747124692.271:293): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.277376 kernel: audit: type=1300 audit(1747124692.271:293): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd41a027a0 a2=0 a3=0 items=0 ppid=2353 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.271000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd41a027a0 a2=0 a3=0 items=0 ppid=2353 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.288683 kernel: audit: type=1327 audit(1747124692.271:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.292000 audit[2576]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.292000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd43b782d0 a2=0 a3=7ffd43b782bc items=0 ppid=2353 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.305324 kernel: audit: type=1325 audit(1747124692.292:294): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.305397 kernel: audit: type=1300 audit(1747124692.292:294): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd43b782d0 a2=0 a3=7ffd43b782bc items=0 ppid=2353 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.305426 kernel: audit: type=1327 audit(1747124692.292:294): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.309000 audit[2576]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.309000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd43b782d0 a2=0 a3=0 items=0 ppid=2353 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:52.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:52.315596 kernel: audit: type=1325 audit(1747124692.309:295): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:52.650156 kubelet[2219]: I0513 08:24:52.650024 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-9w2lk" podStartSLOduration=5.472278955 podStartE2EDuration="9.649960048s" podCreationTimestamp="2025-05-13 08:24:43 +0000 UTC" firstStartedPulling="2025-05-13 08:24:44.419665646 +0000 UTC m=+15.928186125" lastFinishedPulling="2025-05-13 08:24:48.597346739 +0000 UTC m=+20.105867218" observedRunningTime="2025-05-13 08:24:48.798151976 +0000 UTC m=+20.306672455" watchObservedRunningTime="2025-05-13 08:24:52.649960048 +0000 UTC m=+24.158480578" May 13 08:24:52.651641 kubelet[2219]: I0513 08:24:52.651561 2219 topology_manager.go:215] "Topology Admit Handler" podUID="8dbb72fc-3576-4fb0-a420-2012a4770e14" podNamespace="calico-system" podName="calico-typha-66f6fc9456-twd8c" May 
13 08:24:52.772029 kubelet[2219]: I0513 08:24:52.771989 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbb72fc-3576-4fb0-a420-2012a4770e14-tigera-ca-bundle\") pod \"calico-typha-66f6fc9456-twd8c\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " pod="calico-system/calico-typha-66f6fc9456-twd8c" May 13 08:24:52.772160 kubelet[2219]: I0513 08:24:52.772035 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8dbb72fc-3576-4fb0-a420-2012a4770e14-typha-certs\") pod \"calico-typha-66f6fc9456-twd8c\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " pod="calico-system/calico-typha-66f6fc9456-twd8c" May 13 08:24:52.772160 kubelet[2219]: I0513 08:24:52.772063 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcc2n\" (UniqueName: \"kubernetes.io/projected/8dbb72fc-3576-4fb0-a420-2012a4770e14-kube-api-access-rcc2n\") pod \"calico-typha-66f6fc9456-twd8c\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " pod="calico-system/calico-typha-66f6fc9456-twd8c" May 13 08:24:52.851615 kubelet[2219]: I0513 08:24:52.851544 2219 topology_manager.go:215] "Topology Admit Handler" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" podNamespace="calico-system" podName="calico-node-htsgx" May 13 08:24:52.872265 kubelet[2219]: I0513 08:24:52.872215 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8abb2ca2-052d-4f35-a41c-c4db1f01016e-node-certs\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872265 kubelet[2219]: I0513 08:24:52.872259 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-lib-calico\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872447 kubelet[2219]: I0513 08:24:52.872281 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-bin-dir\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872447 kubelet[2219]: I0513 08:24:52.872337 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-lib-modules\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872447 kubelet[2219]: I0513 08:24:52.872367 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-policysync\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872447 kubelet[2219]: I0513 08:24:52.872386 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-net-dir\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872447 kubelet[2219]: I0513 08:24:52.872405 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fbp\" (UniqueName: 
\"kubernetes.io/projected/8abb2ca2-052d-4f35-a41c-c4db1f01016e-kube-api-access-k8fbp\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872618 kubelet[2219]: I0513 08:24:52.872425 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8abb2ca2-052d-4f35-a41c-c4db1f01016e-tigera-ca-bundle\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872618 kubelet[2219]: I0513 08:24:52.872443 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-log-dir\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872618 kubelet[2219]: I0513 08:24:52.872475 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-run-calico\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872618 kubelet[2219]: I0513 08:24:52.872497 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-xtables-lock\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.872618 kubelet[2219]: I0513 08:24:52.872516 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-flexvol-driver-host\") pod \"calico-node-htsgx\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " pod="calico-system/calico-node-htsgx" May 13 08:24:52.965510 env[1260]: time="2025-05-13T08:24:52.964939342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66f6fc9456-twd8c,Uid:8dbb72fc-3576-4fb0-a420-2012a4770e14,Namespace:calico-system,Attempt:0,}" May 13 08:24:52.985598 kubelet[2219]: E0513 08:24:52.985556 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:52.985908 kubelet[2219]: W0513 08:24:52.985868 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:52.986150 kubelet[2219]: E0513 08:24:52.986115 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.009237 kubelet[2219]: E0513 08:24:53.009212 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.009409 kubelet[2219]: W0513 08:24:53.009392 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.009506 kubelet[2219]: E0513 08:24:53.009491 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.014165 kubelet[2219]: I0513 08:24:53.014133 2219 topology_manager.go:215] "Topology Admit Handler" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" podNamespace="calico-system" podName="csi-node-driver-zls78" May 13 08:24:53.014652 kubelet[2219]: E0513 08:24:53.014632 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:24:53.041916 env[1260]: time="2025-05-13T08:24:53.041846579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:53.042528 kubelet[2219]: E0513 08:24:53.042514 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.042638 kubelet[2219]: W0513 08:24:53.042620 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.042748 kubelet[2219]: E0513 08:24:53.042730 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.042917 env[1260]: time="2025-05-13T08:24:53.042886250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:53.045766 env[1260]: time="2025-05-13T08:24:53.045713039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:53.046088 env[1260]: time="2025-05-13T08:24:53.046057893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288 pid=2596 runtime=io.containerd.runc.v2 May 13 08:24:53.074270 kubelet[2219]: E0513 08:24:53.074145 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.074270 kubelet[2219]: W0513 08:24:53.074167 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.074270 kubelet[2219]: E0513 08:24:53.074186 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.074628 kubelet[2219]: E0513 08:24:53.074515 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.074628 kubelet[2219]: W0513 08:24:53.074526 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.074628 kubelet[2219]: E0513 08:24:53.074538 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.156542 env[1260]: time="2025-05-13T08:24:53.156503059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-htsgx,Uid:8abb2ca2-052d-4f35-a41c-c4db1f01016e,Namespace:calico-system,Attempt:0,}" May 13 08:24:53.174765 kubelet[2219]: E0513 08:24:53.174736 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.174765 kubelet[2219]: W0513 08:24:53.174758 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.174765 kubelet[2219]: E0513 08:24:53.174776 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.174980 kubelet[2219]: I0513 08:24:53.174804 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a34ed8-4b6e-4268-a42b-192aa9ef609b-kubelet-dir\") pod \"csi-node-driver-zls78\" (UID: \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\") " pod="calico-system/csi-node-driver-zls78" May 13 08:24:53.174980 kubelet[2219]: E0513 08:24:53.174935 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.174980 kubelet[2219]: W0513 08:24:53.174945 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.174980 kubelet[2219]: E0513 08:24:53.174954 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.174980 kubelet[2219]: I0513 08:24:53.174969 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p8m4\" (UniqueName: \"kubernetes.io/projected/c3a34ed8-4b6e-4268-a42b-192aa9ef609b-kube-api-access-6p8m4\") pod \"csi-node-driver-zls78\" (UID: \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\") " pod="calico-system/csi-node-driver-zls78" May 13 08:24:53.175109 kubelet[2219]: E0513 08:24:53.175097 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.175109 kubelet[2219]: W0513 08:24:53.175106 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.175159 kubelet[2219]: E0513 08:24:53.175115 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.175159 kubelet[2219]: I0513 08:24:53.175130 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3a34ed8-4b6e-4268-a42b-192aa9ef609b-varrun\") pod \"csi-node-driver-zls78\" (UID: \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\") " pod="calico-system/csi-node-driver-zls78" May 13 08:24:53.175272 kubelet[2219]: E0513 08:24:53.175253 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.175272 kubelet[2219]: W0513 08:24:53.175268 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.175357 kubelet[2219]: E0513 08:24:53.175279 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.175357 kubelet[2219]: I0513 08:24:53.175295 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3a34ed8-4b6e-4268-a42b-192aa9ef609b-registration-dir\") pod \"csi-node-driver-zls78\" (UID: \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\") " pod="calico-system/csi-node-driver-zls78" May 13 08:24:53.175457 kubelet[2219]: E0513 08:24:53.175423 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.175457 kubelet[2219]: W0513 08:24:53.175438 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.175457 kubelet[2219]: E0513 08:24:53.175453 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.175544 kubelet[2219]: I0513 08:24:53.175469 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3a34ed8-4b6e-4268-a42b-192aa9ef609b-socket-dir\") pod \"csi-node-driver-zls78\" (UID: \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\") " pod="calico-system/csi-node-driver-zls78" May 13 08:24:53.175745 kubelet[2219]: E0513 08:24:53.175637 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.175745 kubelet[2219]: W0513 08:24:53.175647 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.175745 kubelet[2219]: E0513 08:24:53.175655 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.175829 kubelet[2219]: E0513 08:24:53.175794 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.175829 kubelet[2219]: W0513 08:24:53.175802 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.175829 kubelet[2219]: E0513 08:24:53.175812 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.177116 kubelet[2219]: E0513 08:24:53.177100 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.177116 kubelet[2219]: W0513 08:24:53.177113 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.177208 kubelet[2219]: E0513 08:24:53.177123 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.177405 kubelet[2219]: E0513 08:24:53.177388 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.177405 kubelet[2219]: W0513 08:24:53.177401 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.177483 kubelet[2219]: E0513 08:24:53.177411 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.186643 env[1260]: time="2025-05-13T08:24:53.185564024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:53.186643 env[1260]: time="2025-05-13T08:24:53.185629542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:53.186643 env[1260]: time="2025-05-13T08:24:53.185644141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:53.186643 env[1260]: time="2025-05-13T08:24:53.185789104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2 pid=2662 runtime=io.containerd.runc.v2 May 13 08:24:53.276401 kubelet[2219]: E0513 08:24:53.276308 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.276401 kubelet[2219]: W0513 08:24:53.276339 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.276401 kubelet[2219]: E0513 08:24:53.276358 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.280112 kubelet[2219]: E0513 08:24:53.279748 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.280112 kubelet[2219]: W0513 08:24:53.279763 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.280112 kubelet[2219]: E0513 08:24:53.279781 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.281813 kubelet[2219]: E0513 08:24:53.281720 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.281813 kubelet[2219]: W0513 08:24:53.281732 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.281813 kubelet[2219]: E0513 08:24:53.281791 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.282081 kubelet[2219]: E0513 08:24:53.281966 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.282081 kubelet[2219]: W0513 08:24:53.281976 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.282081 kubelet[2219]: E0513 08:24:53.282059 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.282324 kubelet[2219]: E0513 08:24:53.282231 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.282324 kubelet[2219]: W0513 08:24:53.282240 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.282324 kubelet[2219]: E0513 08:24:53.282271 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.282557 kubelet[2219]: E0513 08:24:53.282465 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.282557 kubelet[2219]: W0513 08:24:53.282475 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.282557 kubelet[2219]: E0513 08:24:53.282510 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.282847 kubelet[2219]: E0513 08:24:53.282733 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.282847 kubelet[2219]: W0513 08:24:53.282744 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.282847 kubelet[2219]: E0513 08:24:53.282772 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.283050 kubelet[2219]: E0513 08:24:53.282994 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283050 kubelet[2219]: W0513 08:24:53.283004 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283050 kubelet[2219]: E0513 08:24:53.283021 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.283644 kubelet[2219]: E0513 08:24:53.283159 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283644 kubelet[2219]: W0513 08:24:53.283185 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283644 kubelet[2219]: E0513 08:24:53.283201 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.283644 kubelet[2219]: E0513 08:24:53.283341 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283644 kubelet[2219]: W0513 08:24:53.283349 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283644 kubelet[2219]: E0513 08:24:53.283358 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.283809 kubelet[2219]: E0513 08:24:53.283652 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283809 kubelet[2219]: W0513 08:24:53.283662 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283809 kubelet[2219]: E0513 08:24:53.283670 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.283809 kubelet[2219]: E0513 08:24:53.283795 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283809 kubelet[2219]: W0513 08:24:53.283803 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283934 kubelet[2219]: E0513 08:24:53.283812 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.283966 kubelet[2219]: E0513 08:24:53.283958 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.283993 kubelet[2219]: W0513 08:24:53.283968 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.283993 kubelet[2219]: E0513 08:24:53.283977 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284100 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285611 kubelet[2219]: W0513 08:24:53.284112 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284127 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284237 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285611 kubelet[2219]: W0513 08:24:53.284245 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284253 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284385 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285611 kubelet[2219]: W0513 08:24:53.284393 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284401 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.285611 kubelet[2219]: E0513 08:24:53.284517 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285933 kubelet[2219]: W0513 08:24:53.284526 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285933 kubelet[2219]: E0513 08:24:53.284534 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.285933 kubelet[2219]: E0513 08:24:53.284692 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285933 kubelet[2219]: W0513 08:24:53.284700 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285933 kubelet[2219]: E0513 08:24:53.284710 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.285933 kubelet[2219]: E0513 08:24:53.284844 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.285933 kubelet[2219]: W0513 08:24:53.284852 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.285933 kubelet[2219]: E0513 08:24:53.284862 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:53.329779 kubelet[2219]: E0513 08:24:53.329746 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:53.330847 kubelet[2219]: W0513 08:24:53.330832 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:53.333213 kubelet[2219]: E0513 08:24:53.333192 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:53.349000 audit[2722]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:53.349000 audit[2722]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffd61b04540 a2=0 a3=7ffd61b0452c items=0 ppid=2353 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:53.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:53.354000 audit[2722]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:24:53.354000 audit[2722]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd61b04540 a2=0 a3=0 items=0 ppid=2353 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:24:53.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:24:53.390819 env[1260]: time="2025-05-13T08:24:53.390758510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66f6fc9456-twd8c,Uid:8dbb72fc-3576-4fb0-a420-2012a4770e14,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\"" May 13 08:24:53.397959 env[1260]: time="2025-05-13T08:24:53.397883211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 08:24:53.422751 env[1260]: time="2025-05-13T08:24:53.422714888Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-htsgx,Uid:8abb2ca2-052d-4f35-a41c-c4db1f01016e,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\"" May 13 08:24:54.677116 kubelet[2219]: E0513 08:24:54.676975 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:24:56.676819 kubelet[2219]: E0513 08:24:56.676779 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:24:57.715238 env[1260]: time="2025-05-13T08:24:57.715192993Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:57.717358 env[1260]: time="2025-05-13T08:24:57.717333944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:57.719150 env[1260]: time="2025-05-13T08:24:57.719108998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:57.721255 env[1260]: time="2025-05-13T08:24:57.721220457Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:57.721544 env[1260]: time="2025-05-13T08:24:57.721501913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 08:24:57.723619 env[1260]: time="2025-05-13T08:24:57.723430754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 08:24:57.735999 env[1260]: time="2025-05-13T08:24:57.735956189Z" level=info msg="CreateContainer within sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 08:24:57.761662 env[1260]: time="2025-05-13T08:24:57.761619682Z" level=info msg="CreateContainer within sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\"" May 13 08:24:57.762436 env[1260]: time="2025-05-13T08:24:57.762411806Z" level=info msg="StartContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\"" May 13 08:24:57.864639 env[1260]: time="2025-05-13T08:24:57.864556550Z" level=info msg="StartContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" returns successfully" May 13 08:24:58.678556 kubelet[2219]: E0513 08:24:58.678486 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:24:58.737436 systemd[1]: 
run-containerd-runc-k8s.io-317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce-runc.Q7Mutx.mount: Deactivated successfully. May 13 08:24:58.869269 kubelet[2219]: E0513 08:24:58.869233 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.869269 kubelet[2219]: W0513 08:24:58.869255 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.869429 kubelet[2219]: E0513 08:24:58.869272 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.869682 kubelet[2219]: E0513 08:24:58.869656 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.869682 kubelet[2219]: W0513 08:24:58.869672 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.869682 kubelet[2219]: E0513 08:24:58.869682 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.869927 kubelet[2219]: E0513 08:24:58.869901 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.869927 kubelet[2219]: W0513 08:24:58.869917 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.869927 kubelet[2219]: E0513 08:24:58.869926 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.870148 kubelet[2219]: E0513 08:24:58.870131 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.870148 kubelet[2219]: W0513 08:24:58.870146 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.870225 kubelet[2219]: E0513 08:24:58.870156 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.870371 kubelet[2219]: E0513 08:24:58.870354 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.870371 kubelet[2219]: W0513 08:24:58.870368 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.870460 kubelet[2219]: E0513 08:24:58.870379 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.870630 kubelet[2219]: E0513 08:24:58.870613 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.870630 kubelet[2219]: W0513 08:24:58.870627 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.870703 kubelet[2219]: E0513 08:24:58.870637 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.870830 kubelet[2219]: E0513 08:24:58.870815 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.870830 kubelet[2219]: W0513 08:24:58.870828 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.870928 kubelet[2219]: E0513 08:24:58.870836 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.871007 kubelet[2219]: E0513 08:24:58.870991 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.871007 kubelet[2219]: W0513 08:24:58.871005 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.871088 kubelet[2219]: E0513 08:24:58.871014 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.871219 kubelet[2219]: E0513 08:24:58.871204 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.871219 kubelet[2219]: W0513 08:24:58.871217 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.871286 kubelet[2219]: E0513 08:24:58.871225 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.871418 kubelet[2219]: E0513 08:24:58.871403 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.871418 kubelet[2219]: W0513 08:24:58.871415 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.871482 kubelet[2219]: E0513 08:24:58.871424 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.871630 kubelet[2219]: E0513 08:24:58.871616 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.871630 kubelet[2219]: W0513 08:24:58.871629 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.871704 kubelet[2219]: E0513 08:24:58.871638 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.871830 kubelet[2219]: E0513 08:24:58.871815 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.871830 kubelet[2219]: W0513 08:24:58.871828 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.871901 kubelet[2219]: E0513 08:24:58.871836 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.872051 kubelet[2219]: E0513 08:24:58.872035 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.872051 kubelet[2219]: W0513 08:24:58.872049 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.872122 kubelet[2219]: E0513 08:24:58.872058 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.872246 kubelet[2219]: E0513 08:24:58.872230 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.872246 kubelet[2219]: W0513 08:24:58.872243 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.872317 kubelet[2219]: E0513 08:24:58.872252 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.872439 kubelet[2219]: E0513 08:24:58.872424 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.872439 kubelet[2219]: W0513 08:24:58.872437 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.872516 kubelet[2219]: E0513 08:24:58.872446 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.950661 kubelet[2219]: E0513 08:24:58.950460 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.950661 kubelet[2219]: W0513 08:24:58.950501 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.950661 kubelet[2219]: E0513 08:24:58.950530 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.951064 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.953618 kubelet[2219]: W0513 08:24:58.951096 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.951123 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.951544 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.953618 kubelet[2219]: W0513 08:24:58.951563 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.951631 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.952008 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.953618 kubelet[2219]: W0513 08:24:58.952028 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.952054 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.953618 kubelet[2219]: E0513 08:24:58.952445 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.954282 kubelet[2219]: W0513 08:24:58.952466 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.952657 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.952881 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.954282 kubelet[2219]: W0513 08:24:58.952898 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.953035 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.953227 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.954282 kubelet[2219]: W0513 08:24:58.953244 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.953379 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.954282 kubelet[2219]: E0513 08:24:58.953621 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.954282 kubelet[2219]: W0513 08:24:58.953640 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.954897 kubelet[2219]: E0513 08:24:58.953667 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.954897 kubelet[2219]: E0513 08:24:58.954079 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.954897 kubelet[2219]: W0513 08:24:58.954099 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.954897 kubelet[2219]: E0513 08:24:58.954126 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.955150 kubelet[2219]: E0513 08:24:58.954969 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.955150 kubelet[2219]: W0513 08:24:58.954990 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.955150 kubelet[2219]: E0513 08:24:58.955130 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.955375 kubelet[2219]: E0513 08:24:58.955353 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.955375 kubelet[2219]: W0513 08:24:58.955371 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.955542 kubelet[2219]: E0513 08:24:58.955505 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.955898 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.958310 kubelet[2219]: W0513 08:24:58.955927 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.956066 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.956254 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.958310 kubelet[2219]: W0513 08:24:58.956272 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.956297 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.956985 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.958310 kubelet[2219]: W0513 08:24:58.957010 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.957042 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.958310 kubelet[2219]: E0513 08:24:58.957694 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.959003 kubelet[2219]: W0513 08:24:58.957713 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.959003 kubelet[2219]: E0513 08:24:58.957863 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.959003 kubelet[2219]: E0513 08:24:58.958061 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.959003 kubelet[2219]: W0513 08:24:58.958079 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.959003 kubelet[2219]: E0513 08:24:58.958101 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:58.959003 kubelet[2219]: E0513 08:24:58.958411 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.959003 kubelet[2219]: W0513 08:24:58.958430 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.959003 kubelet[2219]: E0513 08:24:58.958451 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:58.960061 kubelet[2219]: E0513 08:24:58.959634 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:58.960061 kubelet[2219]: W0513 08:24:58.959663 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:58.960061 kubelet[2219]: E0513 08:24:58.959686 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.816714 kubelet[2219]: I0513 08:24:59.816683 2219 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 08:24:59.880460 kubelet[2219]: E0513 08:24:59.880383 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.880460 kubelet[2219]: W0513 08:24:59.880432 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.880460 kubelet[2219]: E0513 08:24:59.880461 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.881068 kubelet[2219]: E0513 08:24:59.881015 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.881160 kubelet[2219]: W0513 08:24:59.881071 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.881160 kubelet[2219]: E0513 08:24:59.881094 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.881466 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913144 kubelet[2219]: W0513 08:24:59.881485 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.881506 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.881890 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913144 kubelet[2219]: W0513 08:24:59.881909 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.881929 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.882302 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913144 kubelet[2219]: W0513 08:24:59.882345 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.882365 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.913144 kubelet[2219]: E0513 08:24:59.882739 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913957 kubelet[2219]: W0513 08:24:59.882757 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.882776 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.883119 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913957 kubelet[2219]: W0513 08:24:59.883157 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.883176 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.883553 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913957 kubelet[2219]: W0513 08:24:59.883570 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.883623 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.913957 kubelet[2219]: E0513 08:24:59.883974 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.913957 kubelet[2219]: W0513 08:24:59.884017 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.884036 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.884377 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.914678 kubelet[2219]: W0513 08:24:59.884419 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.884439 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.884844 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.914678 kubelet[2219]: W0513 08:24:59.884863 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.884883 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.885338 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.914678 kubelet[2219]: W0513 08:24:59.885356 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.914678 kubelet[2219]: E0513 08:24:59.885375 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.885876 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.915432 kubelet[2219]: W0513 08:24:59.885896 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.885916 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.886266 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.915432 kubelet[2219]: W0513 08:24:59.886285 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.886304 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.886714 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.915432 kubelet[2219]: W0513 08:24:59.886732 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.915432 kubelet[2219]: E0513 08:24:59.886750 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.960569 kubelet[2219]: E0513 08:24:59.959684 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.960569 kubelet[2219]: W0513 08:24:59.959728 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.960569 kubelet[2219]: E0513 08:24:59.959767 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.960569 kubelet[2219]: E0513 08:24:59.960305 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.960569 kubelet[2219]: W0513 08:24:59.960327 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.960569 kubelet[2219]: E0513 08:24:59.960362 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.961490 kubelet[2219]: E0513 08:24:59.961291 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.961490 kubelet[2219]: W0513 08:24:59.961317 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.961490 kubelet[2219]: E0513 08:24:59.961363 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.961861 kubelet[2219]: E0513 08:24:59.961806 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.961861 kubelet[2219]: W0513 08:24:59.961852 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.962060 kubelet[2219]: E0513 08:24:59.961900 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.962261 kubelet[2219]: E0513 08:24:59.962198 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.962261 kubelet[2219]: W0513 08:24:59.962228 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.962442 kubelet[2219]: E0513 08:24:59.962372 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.962789 kubelet[2219]: E0513 08:24:59.962760 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.962789 kubelet[2219]: W0513 08:24:59.962786 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.962996 kubelet[2219]: E0513 08:24:59.962927 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.963207 kubelet[2219]: E0513 08:24:59.963144 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.963369 kubelet[2219]: W0513 08:24:59.963248 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.963467 kubelet[2219]: E0513 08:24:59.963402 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.968841 kubelet[2219]: E0513 08:24:59.967749 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.968841 kubelet[2219]: W0513 08:24:59.967776 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.968841 kubelet[2219]: E0513 08:24:59.967845 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.968841 kubelet[2219]: E0513 08:24:59.968244 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.968841 kubelet[2219]: W0513 08:24:59.968265 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.968841 kubelet[2219]: E0513 08:24:59.968287 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.969365 kubelet[2219]: E0513 08:24:59.969064 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.969365 kubelet[2219]: W0513 08:24:59.969089 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.969365 kubelet[2219]: E0513 08:24:59.969126 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.969554 kubelet[2219]: E0513 08:24:59.969449 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.969554 kubelet[2219]: W0513 08:24:59.969473 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.969765 kubelet[2219]: E0513 08:24:59.969495 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.971107 kubelet[2219]: E0513 08:24:59.971059 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.971107 kubelet[2219]: W0513 08:24:59.971091 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.971357 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.971422 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.972315 kubelet[2219]: W0513 08:24:59.971440 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.971725 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.972315 kubelet[2219]: W0513 08:24:59.971756 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.971724 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.971778 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.972082 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.972315 kubelet[2219]: W0513 08:24:59.972108 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.972315 kubelet[2219]: E0513 08:24:59.972147 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.974086 kubelet[2219]: E0513 08:24:59.973389 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.974086 kubelet[2219]: W0513 08:24:59.973415 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.974086 kubelet[2219]: E0513 08:24:59.973440 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:24:59.974625 kubelet[2219]: E0513 08:24:59.974559 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.974807 kubelet[2219]: W0513 08:24:59.974777 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.975107 kubelet[2219]: E0513 08:24:59.975078 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 08:24:59.975454 kubelet[2219]: E0513 08:24:59.975429 2219 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 08:24:59.975722 kubelet[2219]: W0513 08:24:59.975690 2219 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 08:24:59.975883 kubelet[2219]: E0513 08:24:59.975857 2219 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 08:25:00.224069 env[1260]: time="2025-05-13T08:25:00.224032827Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:00.229249 env[1260]: time="2025-05-13T08:25:00.229143791Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:00.235309 env[1260]: time="2025-05-13T08:25:00.235241392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:00.238691 env[1260]: time="2025-05-13T08:25:00.238646577Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:00.240117 env[1260]: time="2025-05-13T08:25:00.240061789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 08:25:00.247143 env[1260]: time="2025-05-13T08:25:00.247116373Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 08:25:00.278108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231307771.mount: Deactivated successfully. 
May 13 08:25:00.279362 env[1260]: time="2025-05-13T08:25:00.279327036Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\"" May 13 08:25:00.282316 env[1260]: time="2025-05-13T08:25:00.282042052Z" level=info msg="StartContainer for \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\"" May 13 08:25:00.336417 systemd[1]: run-containerd-runc-k8s.io-e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0-runc.WbSWsP.mount: Deactivated successfully. May 13 08:25:00.385707 env[1260]: time="2025-05-13T08:25:00.385663645Z" level=info msg="StartContainer for \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\" returns successfully" May 13 08:25:00.676874 kubelet[2219]: E0513 08:25:00.676814 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:01.301979 kubelet[2219]: I0513 08:25:01.070146 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66f6fc9456-twd8c" podStartSLOduration=4.743170017 podStartE2EDuration="9.070083278s" podCreationTimestamp="2025-05-13 08:24:52 +0000 UTC" firstStartedPulling="2025-05-13 08:24:53.395848088 +0000 UTC m=+24.904368577" lastFinishedPulling="2025-05-13 08:24:57.722761359 +0000 UTC m=+29.231281838" observedRunningTime="2025-05-13 08:24:58.848518007 +0000 UTC m=+30.357038536" watchObservedRunningTime="2025-05-13 08:25:01.070083278 +0000 UTC m=+32.578603807" May 13 08:25:01.280489 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0-rootfs.mount: Deactivated successfully. May 13 08:25:01.400043 env[1260]: time="2025-05-13T08:25:01.399890163Z" level=info msg="shim disconnected" id=e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0 May 13 08:25:01.400984 env[1260]: time="2025-05-13T08:25:01.400061594Z" level=warning msg="cleaning up after shim disconnected" id=e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0 namespace=k8s.io May 13 08:25:01.400984 env[1260]: time="2025-05-13T08:25:01.400087701Z" level=info msg="cleaning up dead shim" May 13 08:25:01.439142 env[1260]: time="2025-05-13T08:25:01.438517803Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2910 runtime=io.containerd.runc.v2\n" May 13 08:25:01.832685 env[1260]: time="2025-05-13T08:25:01.832567101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 08:25:02.681446 kubelet[2219]: E0513 08:25:02.681385 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:04.676637 kubelet[2219]: E0513 08:25:04.676424 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:05.936364 kubelet[2219]: I0513 08:25:05.935450 2219 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 08:25:06.095000 audit[2925]: NETFILTER_CFG table=filter:95 
family=2 entries=17 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:06.101757 kernel: kauditd_printk_skb: 8 callbacks suppressed May 13 08:25:06.101834 kernel: audit: type=1325 audit(1747124706.095:298): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:06.095000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd9d012c50 a2=0 a3=7ffd9d012c3c items=0 ppid=2353 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:06.114010 kernel: audit: type=1300 audit(1747124706.095:298): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd9d012c50 a2=0 a3=7ffd9d012c3c items=0 ppid=2353 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:06.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:06.115000 audit[2925]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:06.125514 kernel: audit: type=1327 audit(1747124706.095:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:06.125619 kernel: audit: type=1325 audit(1747124706.115:299): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:06.115000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd9d012c50 a2=0 a3=7ffd9d012c3c items=0 ppid=2353 
pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:06.133629 kernel: audit: type=1300 audit(1747124706.115:299): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd9d012c50 a2=0 a3=7ffd9d012c3c items=0 ppid=2353 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:06.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:06.138627 kernel: audit: type=1327 audit(1747124706.115:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:06.677141 kubelet[2219]: E0513 08:25:06.677075 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:08.678756 kubelet[2219]: E0513 08:25:08.678714 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:10.677564 kubelet[2219]: E0513 08:25:10.677476 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:11.287086 env[1260]: time="2025-05-13T08:25:11.286966878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.292442 env[1260]: time="2025-05-13T08:25:11.292369213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.299025 env[1260]: time="2025-05-13T08:25:11.298957816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.303468 env[1260]: time="2025-05-13T08:25:11.303362581Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.305669 env[1260]: time="2025-05-13T08:25:11.305561086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 08:25:11.317210 env[1260]: time="2025-05-13T08:25:11.316277555Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 08:25:11.349222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076235000.mount: Deactivated successfully. 
May 13 08:25:11.364921 env[1260]: time="2025-05-13T08:25:11.364885487Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\"" May 13 08:25:11.366652 env[1260]: time="2025-05-13T08:25:11.366135763Z" level=info msg="StartContainer for \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\"" May 13 08:25:11.484614 env[1260]: time="2025-05-13T08:25:11.482938677Z" level=info msg="StartContainer for \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\" returns successfully" May 13 08:25:12.341419 systemd[1]: run-containerd-runc-k8s.io-54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02-runc.ScMcgK.mount: Deactivated successfully. May 13 08:25:12.734433 kubelet[2219]: E0513 08:25:12.734388 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:13.859060 env[1260]: time="2025-05-13T08:25:13.858998956Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 08:25:13.887076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02-rootfs.mount: Deactivated successfully. 
May 13 08:25:13.949860 kubelet[2219]: I0513 08:25:13.949813 2219 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 08:25:14.102402 kubelet[2219]: I0513 08:25:14.102302 2219 topology_manager.go:215] "Topology Admit Handler" podUID="033c33ae-894a-48f0-a6ac-c8632ff173d5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fx4p8" May 13 08:25:14.141331 kubelet[2219]: I0513 08:25:14.141278 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/033c33ae-894a-48f0-a6ac-c8632ff173d5-config-volume\") pod \"coredns-7db6d8ff4d-fx4p8\" (UID: \"033c33ae-894a-48f0-a6ac-c8632ff173d5\") " pod="kube-system/coredns-7db6d8ff4d-fx4p8" May 13 08:25:14.213018 kubelet[2219]: I0513 08:25:14.212930 2219 topology_manager.go:215] "Topology Admit Handler" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m5nkr" May 13 08:25:14.221095 kubelet[2219]: I0513 08:25:14.221025 2219 topology_manager.go:215] "Topology Admit Handler" podUID="ff325600-1957-4033-960b-4591b00b1eb4" podNamespace="calico-system" podName="calico-kube-controllers-5c9b5b8b87-ffc5v" May 13 08:25:14.239434 kubelet[2219]: I0513 08:25:14.239335 2219 topology_manager.go:215] "Topology Admit Handler" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" podNamespace="calico-apiserver" podName="calico-apiserver-769cd4b6f5-q7tml" May 13 08:25:14.250269 kubelet[2219]: I0513 08:25:14.250213 2219 topology_manager.go:215] "Topology Admit Handler" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" podNamespace="calico-apiserver" podName="calico-apiserver-769cd4b6f5-h8qgj" May 13 08:25:14.252879 kubelet[2219]: I0513 08:25:14.252833 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9sq\" (UniqueName: \"kubernetes.io/projected/ff325600-1957-4033-960b-4591b00b1eb4-kube-api-access-tq9sq\") pod 
\"calico-kube-controllers-5c9b5b8b87-ffc5v\" (UID: \"ff325600-1957-4033-960b-4591b00b1eb4\") " pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" May 13 08:25:14.253138 kubelet[2219]: I0513 08:25:14.253094 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nmb\" (UniqueName: \"kubernetes.io/projected/033c33ae-894a-48f0-a6ac-c8632ff173d5-kube-api-access-72nmb\") pod \"coredns-7db6d8ff4d-fx4p8\" (UID: \"033c33ae-894a-48f0-a6ac-c8632ff173d5\") " pod="kube-system/coredns-7db6d8ff4d-fx4p8" May 13 08:25:14.255743 kubelet[2219]: I0513 08:25:14.255223 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssm9\" (UniqueName: \"kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9\") pod \"calico-apiserver-769cd4b6f5-q7tml\" (UID: \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\") " pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" May 13 08:25:14.255743 kubelet[2219]: I0513 08:25:14.255304 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-calico-apiserver-certs\") pod \"calico-apiserver-769cd4b6f5-q7tml\" (UID: \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\") " pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" May 13 08:25:14.255743 kubelet[2219]: I0513 08:25:14.255365 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92krp\" (UniqueName: \"kubernetes.io/projected/9b8bef73-58a7-4997-947c-91687cbacd52-kube-api-access-92krp\") pod \"coredns-7db6d8ff4d-m5nkr\" (UID: \"9b8bef73-58a7-4997-947c-91687cbacd52\") " pod="kube-system/coredns-7db6d8ff4d-m5nkr" May 13 08:25:14.255743 kubelet[2219]: I0513 08:25:14.255413 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff325600-1957-4033-960b-4591b00b1eb4-tigera-ca-bundle\") pod \"calico-kube-controllers-5c9b5b8b87-ffc5v\" (UID: \"ff325600-1957-4033-960b-4591b00b1eb4\") " pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" May 13 08:25:14.255743 kubelet[2219]: I0513 08:25:14.255492 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b8bef73-58a7-4997-947c-91687cbacd52-config-volume\") pod \"coredns-7db6d8ff4d-m5nkr\" (UID: \"9b8bef73-58a7-4997-947c-91687cbacd52\") " pod="kube-system/coredns-7db6d8ff4d-m5nkr" May 13 08:25:14.258395 kubelet[2219]: I0513 08:25:14.258317 2219 topology_manager.go:215] "Topology Admit Handler" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" podNamespace="calico-apiserver" podName="calico-apiserver-6644f7cd55-wdj2v" May 13 08:25:14.263552 kubelet[2219]: W0513 08:25:14.263513 2219 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510-3-7-n-f896a7891b.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-7-n-f896a7891b.novalocal' and this object May 13 08:25:14.263867 kubelet[2219]: E0513 08:25:14.263835 2219 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510-3-7-n-f896a7891b.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-7-n-f896a7891b.novalocal' and this object May 13 08:25:14.264688 kubelet[2219]: W0513 08:25:14.264624 2219 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps 
"kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-n-f896a7891b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-7-n-f896a7891b.novalocal' and this object May 13 08:25:14.266688 kubelet[2219]: E0513 08:25:14.266659 2219 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-n-f896a7891b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-7-n-f896a7891b.novalocal' and this object May 13 08:25:14.357664 kubelet[2219]: I0513 08:25:14.357565 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be3ad376-6a34-448c-8bd7-d065d8e46df2-calico-apiserver-certs\") pod \"calico-apiserver-769cd4b6f5-h8qgj\" (UID: \"be3ad376-6a34-448c-8bd7-d065d8e46df2\") " pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" May 13 08:25:14.369691 kubelet[2219]: I0513 08:25:14.368189 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtp9l\" (UniqueName: \"kubernetes.io/projected/00446d97-a96a-4df2-93a8-5f3d59494b3b-kube-api-access-wtp9l\") pod \"calico-apiserver-6644f7cd55-wdj2v\" (UID: \"00446d97-a96a-4df2-93a8-5f3d59494b3b\") " pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" May 13 08:25:14.369691 kubelet[2219]: I0513 08:25:14.368401 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8m6\" (UniqueName: \"kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6\") pod \"calico-apiserver-769cd4b6f5-h8qgj\" (UID: \"be3ad376-6a34-448c-8bd7-d065d8e46df2\") " 
pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" May 13 08:25:14.369691 kubelet[2219]: I0513 08:25:14.368637 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00446d97-a96a-4df2-93a8-5f3d59494b3b-calico-apiserver-certs\") pod \"calico-apiserver-6644f7cd55-wdj2v\" (UID: \"00446d97-a96a-4df2-93a8-5f3d59494b3b\") " pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" May 13 08:25:14.518982 env[1260]: time="2025-05-13T08:25:14.518810172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5nkr,Uid:9b8bef73-58a7-4997-947c-91687cbacd52,Namespace:kube-system,Attempt:0,}" May 13 08:25:14.571364 env[1260]: time="2025-05-13T08:25:14.530850261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9b5b8b87-ffc5v,Uid:ff325600-1957-4033-960b-4591b00b1eb4,Namespace:calico-system,Attempt:0,}" May 13 08:25:14.589220 env[1260]: time="2025-05-13T08:25:14.589142827Z" level=info msg="shim disconnected" id=54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02 May 13 08:25:14.589711 env[1260]: time="2025-05-13T08:25:14.589664342Z" level=warning msg="cleaning up after shim disconnected" id=54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02 namespace=k8s.io May 13 08:25:14.589890 env[1260]: time="2025-05-13T08:25:14.589855337Z" level=info msg="cleaning up dead shim" May 13 08:25:14.681133 env[1260]: time="2025-05-13T08:25:14.681094229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zls78,Uid:c3a34ed8-4b6e-4268-a42b-192aa9ef609b,Namespace:calico-system,Attempt:0,}" May 13 08:25:14.708150 env[1260]: time="2025-05-13T08:25:14.708114320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fx4p8,Uid:033c33ae-894a-48f0-a6ac-c8632ff173d5,Namespace:kube-system,Attempt:0,}" May 13 08:25:14.821385 env[1260]: 
time="2025-05-13T08:25:14.821248220Z" level=error msg="Failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.822128 env[1260]: time="2025-05-13T08:25:14.822090740Z" level=error msg="encountered an error cleaning up failed sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.822335 env[1260]: time="2025-05-13T08:25:14.822291363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5nkr,Uid:9b8bef73-58a7-4997-947c-91687cbacd52,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.823602 kubelet[2219]: E0513 08:25:14.822614 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.823602 kubelet[2219]: E0513 08:25:14.822717 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m5nkr" May 13 08:25:14.823602 kubelet[2219]: E0513 08:25:14.822781 2219 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m5nkr" May 13 08:25:14.823784 kubelet[2219]: E0513 08:25:14.822826 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m5nkr_kube-system(9b8bef73-58a7-4997-947c-91687cbacd52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m5nkr_kube-system(9b8bef73-58a7-4997-947c-91687cbacd52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" May 13 08:25:14.861492 env[1260]: time="2025-05-13T08:25:14.861428052Z" level=error msg="Failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.861922 env[1260]: 
time="2025-05-13T08:25:14.861840086Z" level=error msg="encountered an error cleaning up failed sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.861922 env[1260]: time="2025-05-13T08:25:14.861895278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c9b5b8b87-ffc5v,Uid:ff325600-1957-4033-960b-4591b00b1eb4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.862622 kubelet[2219]: E0513 08:25:14.862181 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.862622 kubelet[2219]: E0513 08:25:14.862265 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" May 13 08:25:14.862622 kubelet[2219]: E0513 08:25:14.862307 2219 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" May 13 08:25:14.862780 kubelet[2219]: E0513 08:25:14.862370 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c9b5b8b87-ffc5v_calico-system(ff325600-1957-4033-960b-4591b00b1eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c9b5b8b87-ffc5v_calico-system(ff325600-1957-4033-960b-4591b00b1eb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" podUID="ff325600-1957-4033-960b-4591b00b1eb4" May 13 08:25:14.867163 env[1260]: time="2025-05-13T08:25:14.867110987Z" level=error msg="Failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.867665 env[1260]: time="2025-05-13T08:25:14.867634627Z" level=error msg="encountered an error cleaning up failed sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.867828 env[1260]: time="2025-05-13T08:25:14.867785115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zls78,Uid:c3a34ed8-4b6e-4268-a42b-192aa9ef609b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.874261 kubelet[2219]: E0513 08:25:14.873537 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.874261 kubelet[2219]: E0513 08:25:14.873726 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zls78" May 13 08:25:14.874261 kubelet[2219]: E0513 08:25:14.873774 2219 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zls78" May 13 08:25:14.875243 kubelet[2219]: E0513 08:25:14.873862 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zls78_calico-system(c3a34ed8-4b6e-4268-a42b-192aa9ef609b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zls78_calico-system(c3a34ed8-4b6e-4268-a42b-192aa9ef609b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:14.888065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435-shm.mount: Deactivated successfully. May 13 08:25:14.888198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191-shm.mount: Deactivated successfully. 
May 13 08:25:14.892213 kubelet[2219]: I0513 08:25:14.891687 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:25:14.894092 env[1260]: time="2025-05-13T08:25:14.894061108Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:25:14.900508 kubelet[2219]: I0513 08:25:14.897744 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:25:14.900508 kubelet[2219]: I0513 08:25:14.899324 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:25:14.900715 env[1260]: time="2025-05-13T08:25:14.898379034Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:25:14.900715 env[1260]: time="2025-05-13T08:25:14.899744183Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:25:14.910800 env[1260]: time="2025-05-13T08:25:14.909955865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 08:25:14.922374 env[1260]: time="2025-05-13T08:25:14.922309744Z" level=error msg="Failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.925017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd-shm.mount: Deactivated successfully. 
May 13 08:25:14.925683 env[1260]: time="2025-05-13T08:25:14.925648467Z" level=error msg="encountered an error cleaning up failed sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.925827 env[1260]: time="2025-05-13T08:25:14.925796160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fx4p8,Uid:033c33ae-894a-48f0-a6ac-c8632ff173d5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.929046 kubelet[2219]: E0513 08:25:14.927357 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.929046 kubelet[2219]: E0513 08:25:14.927433 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fx4p8" May 13 08:25:14.929046 kubelet[2219]: E0513 08:25:14.927457 2219 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fx4p8" May 13 08:25:14.929229 kubelet[2219]: E0513 08:25:14.927528 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fx4p8_kube-system(033c33ae-894a-48f0-a6ac-c8632ff173d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fx4p8_kube-system(033c33ae-894a-48f0-a6ac-c8632ff173d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fx4p8" podUID="033c33ae-894a-48f0-a6ac-c8632ff173d5" May 13 08:25:14.961924 env[1260]: time="2025-05-13T08:25:14.961866141Z" level=error msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" failed" error="failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.962153 kubelet[2219]: E0513 08:25:14.962084 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:25:14.962453 kubelet[2219]: E0513 08:25:14.962143 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191"} May 13 08:25:14.962453 kubelet[2219]: E0513 08:25:14.962219 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:14.962453 kubelet[2219]: E0513 08:25:14.962248 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" May 13 08:25:14.991617 env[1260]: time="2025-05-13T08:25:14.991547501Z" level=error msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" failed" error="failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.992090 kubelet[2219]: E0513 08:25:14.991935 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:25:14.992090 kubelet[2219]: E0513 08:25:14.991994 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435"} May 13 08:25:14.992090 kubelet[2219]: E0513 08:25:14.992030 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:14.992090 kubelet[2219]: E0513 08:25:14.992055 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" podUID="ff325600-1957-4033-960b-4591b00b1eb4" May 13 08:25:14.999308 env[1260]: time="2025-05-13T08:25:14.999260816Z" level=error msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" failed" error="failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:14.999616 kubelet[2219]: E0513 08:25:14.999483 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:25:14.999616 kubelet[2219]: E0513 08:25:14.999514 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c"} May 13 08:25:14.999616 kubelet[2219]: E0513 08:25:14.999542 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:14.999616 
kubelet[2219]: E0513 08:25:14.999563 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:15.410497 kubelet[2219]: E0513 08:25:15.410469 2219 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.410700 kubelet[2219]: E0513 08:25:15.410688 2219 projected.go:200] Error preparing data for projected volume kube-api-access-9ssm9 for pod calico-apiserver/calico-apiserver-769cd4b6f5-q7tml: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.410875 kubelet[2219]: E0513 08:25:15.410843 2219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9 podName:ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7 nodeName:}" failed. No retries permitted until 2025-05-13 08:25:15.910805272 +0000 UTC m=+47.419325751 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9ssm9" (UniqueName: "kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9") pod "calico-apiserver-769cd4b6f5-q7tml" (UID: "ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7") : failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.483053 kubelet[2219]: E0513 08:25:15.483001 2219 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.483225 kubelet[2219]: E0513 08:25:15.483060 2219 projected.go:200] Error preparing data for projected volume kube-api-access-bv8m6 for pod calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.483225 kubelet[2219]: E0513 08:25:15.483169 2219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6 podName:be3ad376-6a34-448c-8bd7-d065d8e46df2 nodeName:}" failed. No retries permitted until 2025-05-13 08:25:15.983135111 +0000 UTC m=+47.491655640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bv8m6" (UniqueName: "kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6") pod "calico-apiserver-769cd4b6f5-h8qgj" (UID: "be3ad376-6a34-448c-8bd7-d065d8e46df2") : failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.486187 kubelet[2219]: E0513 08:25:15.486169 2219 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.486273 kubelet[2219]: E0513 08:25:15.486262 2219 projected.go:200] Error preparing data for projected volume kube-api-access-wtp9l for pod calico-apiserver/calico-apiserver-6644f7cd55-wdj2v: failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.486384 kubelet[2219]: E0513 08:25:15.486373 2219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00446d97-a96a-4df2-93a8-5f3d59494b3b-kube-api-access-wtp9l podName:00446d97-a96a-4df2-93a8-5f3d59494b3b nodeName:}" failed. No retries permitted until 2025-05-13 08:25:15.986355473 +0000 UTC m=+47.494875952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wtp9l" (UniqueName: "kubernetes.io/projected/00446d97-a96a-4df2-93a8-5f3d59494b3b-kube-api-access-wtp9l") pod "calico-apiserver-6644f7cd55-wdj2v" (UID: "00446d97-a96a-4df2-93a8-5f3d59494b3b") : failed to sync configmap cache: timed out waiting for the condition May 13 08:25:15.904637 kubelet[2219]: I0513 08:25:15.904530 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:15.905785 env[1260]: time="2025-05-13T08:25:15.905724441Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:25:15.974955 env[1260]: time="2025-05-13T08:25:15.974842094Z" level=error msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" failed" error="failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:15.975280 kubelet[2219]: E0513 08:25:15.975204 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:15.976038 kubelet[2219]: E0513 08:25:15.975293 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd"} May 13 08:25:15.976038 kubelet[2219]: 
E0513 08:25:15.975369 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:15.976038 kubelet[2219]: E0513 08:25:15.975465 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fx4p8" podUID="033c33ae-894a-48f0-a6ac-c8632ff173d5" May 13 08:25:16.070166 env[1260]: time="2025-05-13T08:25:16.070083644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-q7tml,Uid:ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7,Namespace:calico-apiserver,Attempt:0,}" May 13 08:25:16.196405 env[1260]: time="2025-05-13T08:25:16.196301360Z" level=error msg="Failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.199404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b-shm.mount: Deactivated successfully. 
May 13 08:25:16.199997 env[1260]: time="2025-05-13T08:25:16.199923164Z" level=error msg="encountered an error cleaning up failed sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.200126 env[1260]: time="2025-05-13T08:25:16.200096536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-q7tml,Uid:ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.201625 kubelet[2219]: E0513 08:25:16.200724 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.201625 kubelet[2219]: E0513 08:25:16.200781 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" May 13 08:25:16.201625 kubelet[2219]: E0513 
08:25:16.200803 2219 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" May 13 08:25:16.201771 kubelet[2219]: E0513 08:25:16.200879 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769cd4b6f5-q7tml_calico-apiserver(ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769cd4b6f5-q7tml_calico-apiserver(ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" May 13 08:25:16.363893 env[1260]: time="2025-05-13T08:25:16.363756884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-h8qgj,Uid:be3ad376-6a34-448c-8bd7-d065d8e46df2,Namespace:calico-apiserver,Attempt:0,}" May 13 08:25:16.372295 env[1260]: time="2025-05-13T08:25:16.372234583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-wdj2v,Uid:00446d97-a96a-4df2-93a8-5f3d59494b3b,Namespace:calico-apiserver,Attempt:0,}" May 13 08:25:16.484846 env[1260]: time="2025-05-13T08:25:16.484703824Z" level=error msg="Failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.485508 env[1260]: time="2025-05-13T08:25:16.485474304Z" level=error msg="encountered an error cleaning up failed sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.485702 env[1260]: time="2025-05-13T08:25:16.485657735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-wdj2v,Uid:00446d97-a96a-4df2-93a8-5f3d59494b3b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.486085 kubelet[2219]: E0513 08:25:16.486043 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.486163 kubelet[2219]: E0513 08:25:16.486107 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" May 13 08:25:16.486163 kubelet[2219]: E0513 08:25:16.486137 2219 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" May 13 08:25:16.486232 kubelet[2219]: E0513 08:25:16.486180 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6644f7cd55-wdj2v_calico-apiserver(00446d97-a96a-4df2-93a8-5f3d59494b3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6644f7cd55-wdj2v_calico-apiserver(00446d97-a96a-4df2-93a8-5f3d59494b3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" May 13 08:25:16.504971 env[1260]: time="2025-05-13T08:25:16.504919464Z" level=error msg="Failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.505416 env[1260]: time="2025-05-13T08:25:16.505375511Z" level=error msg="encountered an error cleaning up failed sandbox 
\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.505540 env[1260]: time="2025-05-13T08:25:16.505510852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-h8qgj,Uid:be3ad376-6a34-448c-8bd7-d065d8e46df2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.505876 kubelet[2219]: E0513 08:25:16.505839 2219 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.505943 kubelet[2219]: E0513 08:25:16.505897 2219 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" May 13 08:25:16.505943 kubelet[2219]: E0513 08:25:16.505920 2219 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" May 13 08:25:16.506018 kubelet[2219]: E0513 08:25:16.505956 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-769cd4b6f5-h8qgj_calico-apiserver(be3ad376-6a34-448c-8bd7-d065d8e46df2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-769cd4b6f5-h8qgj_calico-apiserver(be3ad376-6a34-448c-8bd7-d065d8e46df2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" May 13 08:25:16.907602 kubelet[2219]: I0513 08:25:16.907289 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:25:16.909368 env[1260]: time="2025-05-13T08:25:16.909224483Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:25:16.911048 kubelet[2219]: I0513 08:25:16.910943 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:25:16.913271 env[1260]: time="2025-05-13T08:25:16.912346410Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:25:16.914819 kubelet[2219]: I0513 08:25:16.914606 
2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:25:16.915213 env[1260]: time="2025-05-13T08:25:16.915190790Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:25:16.988656 env[1260]: time="2025-05-13T08:25:16.988605806Z" level=error msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" failed" error="failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:16.989123 kubelet[2219]: E0513 08:25:16.988958 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:25:16.989123 kubelet[2219]: E0513 08:25:16.989016 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b"} May 13 08:25:16.989123 kubelet[2219]: E0513 08:25:16.989052 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:16.989123 kubelet[2219]: E0513 08:25:16.989078 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" May 13 08:25:17.000557 env[1260]: time="2025-05-13T08:25:17.000504707Z" level=error msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" failed" error="failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:17.001122 kubelet[2219]: E0513 08:25:17.000925 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:25:17.001122 kubelet[2219]: E0513 08:25:17.000988 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d"} May 13 08:25:17.001122 kubelet[2219]: E0513 08:25:17.001028 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:17.001122 kubelet[2219]: E0513 08:25:17.001096 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" May 13 08:25:17.019513 env[1260]: time="2025-05-13T08:25:17.019456892Z" level=error msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" failed" error="failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:17.020025 kubelet[2219]: E0513 08:25:17.019781 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:25:17.020025 kubelet[2219]: E0513 08:25:17.019845 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7"} May 13 08:25:17.020025 kubelet[2219]: E0513 08:25:17.019882 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:17.020025 kubelet[2219]: E0513 08:25:17.019968 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" May 13 08:25:25.680544 env[1260]: time="2025-05-13T08:25:25.680412326Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:25:25.747563 
env[1260]: time="2025-05-13T08:25:25.746216766Z" level=error msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" failed" error="failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:25.748014 kubelet[2219]: E0513 08:25:25.747708 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:25:25.748014 kubelet[2219]: E0513 08:25:25.747771 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c"} May 13 08:25:25.748014 kubelet[2219]: E0513 08:25:25.747824 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:25.748014 kubelet[2219]: E0513 08:25:25.747856 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:26.680680 env[1260]: time="2025-05-13T08:25:26.680627478Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:25:26.743046 env[1260]: time="2025-05-13T08:25:26.742960394Z" level=error msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" failed" error="failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:26.743492 kubelet[2219]: E0513 08:25:26.743445 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:25:26.743649 kubelet[2219]: E0513 08:25:26.743511 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435"} May 13 08:25:26.743649 kubelet[2219]: E0513 08:25:26.743550 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:26.743649 kubelet[2219]: E0513 08:25:26.743595 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" podUID="ff325600-1957-4033-960b-4591b00b1eb4" May 13 08:25:27.203236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087698826.mount: Deactivated successfully. 
May 13 08:25:27.251702 env[1260]: time="2025-05-13T08:25:27.251660676Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:27.255172 env[1260]: time="2025-05-13T08:25:27.255144655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:27.258389 env[1260]: time="2025-05-13T08:25:27.258365200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:27.261427 env[1260]: time="2025-05-13T08:25:27.261355362Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:27.261948 env[1260]: time="2025-05-13T08:25:27.261919892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 08:25:27.288041 env[1260]: time="2025-05-13T08:25:27.287904113Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 08:25:27.322709 env[1260]: time="2025-05-13T08:25:27.322671869Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\"" May 13 08:25:27.323466 env[1260]: time="2025-05-13T08:25:27.323443227Z" level=info msg="StartContainer for 
\"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\"" May 13 08:25:27.457783 env[1260]: time="2025-05-13T08:25:27.457683477Z" level=info msg="StartContainer for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\" returns successfully" May 13 08:25:27.549256 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 08:25:27.549401 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 08:25:27.678474 env[1260]: time="2025-05-13T08:25:27.678408134Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:25:27.750691 env[1260]: time="2025-05-13T08:25:27.750420005Z" level=error msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" failed" error="failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:27.752136 kubelet[2219]: E0513 08:25:27.751789 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:25:27.752136 kubelet[2219]: E0513 08:25:27.751922 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191"} May 13 08:25:27.752136 kubelet[2219]: E0513 08:25:27.751997 2219 kuberuntime_manager.go:1075] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:27.752136 kubelet[2219]: E0513 08:25:27.752054 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" May 13 08:25:28.208284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d-rootfs.mount: Deactivated successfully. 
May 13 08:25:28.732906 env[1260]: time="2025-05-13T08:25:28.732789547Z" level=info msg="shim disconnected" id=8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d May 13 08:25:28.733394 env[1260]: time="2025-05-13T08:25:28.732807691Z" level=error msg="ExecSync for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"951eeb990aceb24f0f2eefe49bc6915783d3b341994c74868b660f028e88801f\": container not created: not found" May 13 08:25:28.733859 env[1260]: time="2025-05-13T08:25:28.733312750Z" level=warning msg="cleaning up after shim disconnected" id=8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d namespace=k8s.io May 13 08:25:28.734078 env[1260]: time="2025-05-13T08:25:28.734040056Z" level=info msg="cleaning up dead shim" May 13 08:25:28.734987 kubelet[2219]: E0513 08:25:28.734845 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"951eeb990aceb24f0f2eefe49bc6915783d3b341994c74868b660f028e88801f\": container not created: not found" containerID="8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:28.738487 env[1260]: time="2025-05-13T08:25:28.738352477Z" level=error msg="ExecSync for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d not found: not found" May 13 08:25:28.739215 kubelet[2219]: E0513 08:25:28.739130 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d not found: not found" containerID="8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:28.742410 env[1260]: time="2025-05-13T08:25:28.742320921Z" level=error msg="ExecSync for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d not found: not found" May 13 08:25:28.765618 kubelet[2219]: E0513 08:25:28.742989 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d not found: not found" containerID="8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:28.806206 env[1260]: time="2025-05-13T08:25:28.806156274Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3486 runtime=io.containerd.runc.v2\n" May 13 08:25:28.962764 kubelet[2219]: I0513 08:25:28.962714 2219 scope.go:117] "RemoveContainer" containerID="8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d" May 13 08:25:28.967872 env[1260]: time="2025-05-13T08:25:28.967689127Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 13 08:25:29.012119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895875054.mount: Deactivated successfully. 
May 13 08:25:29.022602 env[1260]: time="2025-05-13T08:25:29.022517522Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478\"" May 13 08:25:29.024548 env[1260]: time="2025-05-13T08:25:29.024489780Z" level=info msg="StartContainer for \"de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478\"" May 13 08:25:29.096598 env[1260]: time="2025-05-13T08:25:29.095177038Z" level=info msg="StartContainer for \"de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478\" returns successfully" May 13 08:25:29.209377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478-rootfs.mount: Deactivated successfully. May 13 08:25:29.216214 env[1260]: time="2025-05-13T08:25:29.216136855Z" level=info msg="shim disconnected" id=de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478 May 13 08:25:29.216475 env[1260]: time="2025-05-13T08:25:29.216441147Z" level=warning msg="cleaning up after shim disconnected" id=de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478 namespace=k8s.io May 13 08:25:29.216644 env[1260]: time="2025-05-13T08:25:29.216567595Z" level=info msg="cleaning up dead shim" May 13 08:25:29.225783 env[1260]: time="2025-05-13T08:25:29.225722615Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3554 runtime=io.containerd.runc.v2\n" May 13 08:25:29.678171 env[1260]: time="2025-05-13T08:25:29.678096542Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:25:29.751473 env[1260]: time="2025-05-13T08:25:29.751352048Z" level=error msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" failed" 
error="failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:29.752673 kubelet[2219]: E0513 08:25:29.752223 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:25:29.752673 kubelet[2219]: E0513 08:25:29.752356 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b"} May 13 08:25:29.752673 kubelet[2219]: E0513 08:25:29.752457 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:29.752673 kubelet[2219]: E0513 08:25:29.752530 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" May 13 08:25:29.971789 kubelet[2219]: I0513 08:25:29.971567 2219 scope.go:117] "RemoveContainer" containerID="8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d" May 13 08:25:29.973988 kubelet[2219]: I0513 08:25:29.973944 2219 scope.go:117] "RemoveContainer" containerID="de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478" May 13 08:25:29.976543 kubelet[2219]: E0513 08:25:29.976500 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-htsgx_calico-system(8abb2ca2-052d-4f35-a41c-c4db1f01016e)\"" pod="calico-system/calico-node-htsgx" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" May 13 08:25:29.980655 env[1260]: time="2025-05-13T08:25:29.980538485Z" level=info msg="RemoveContainer for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\"" May 13 08:25:29.988182 env[1260]: time="2025-05-13T08:25:29.988108677Z" level=info msg="RemoveContainer for \"8f6f309ecfd356dadb37386c7a61c045c9cdfa74fc944e9449abfb987357610d\" returns successfully" May 13 08:25:30.685434 env[1260]: time="2025-05-13T08:25:30.683754267Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:25:30.757404 env[1260]: time="2025-05-13T08:25:30.757282502Z" level=error msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" failed" error="failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:30.765899 kubelet[2219]: E0513 08:25:30.765809 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:25:30.766141 kubelet[2219]: E0513 08:25:30.765992 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d"} May 13 08:25:30.766141 kubelet[2219]: E0513 08:25:30.766077 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:30.766438 kubelet[2219]: E0513 08:25:30.766144 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" May 13 08:25:30.984336 kubelet[2219]: I0513 08:25:30.981872 2219 scope.go:117] "RemoveContainer" containerID="de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478" May 13 08:25:30.984336 kubelet[2219]: E0513 08:25:30.982945 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-htsgx_calico-system(8abb2ca2-052d-4f35-a41c-c4db1f01016e)\"" pod="calico-system/calico-node-htsgx" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" May 13 08:25:31.679328 env[1260]: time="2025-05-13T08:25:31.677920625Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:25:31.679328 env[1260]: time="2025-05-13T08:25:31.678091797Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:25:31.778999 env[1260]: time="2025-05-13T08:25:31.778917188Z" level=error msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" failed" error="failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:31.779475 kubelet[2219]: E0513 08:25:31.779280 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:25:31.779475 kubelet[2219]: E0513 08:25:31.779369 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7"} May 13 08:25:31.779475 kubelet[2219]: E0513 08:25:31.779406 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:31.779475 kubelet[2219]: E0513 08:25:31.779433 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" May 13 08:25:31.784189 env[1260]: time="2025-05-13T08:25:31.784122879Z" level=error msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" failed" error="failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 13 08:25:31.784500 kubelet[2219]: E0513 08:25:31.784343 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:31.784500 kubelet[2219]: E0513 08:25:31.784405 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd"} May 13 08:25:31.784500 kubelet[2219]: E0513 08:25:31.784433 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:31.784500 kubelet[2219]: E0513 08:25:31.784475 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fx4p8" podUID="033c33ae-894a-48f0-a6ac-c8632ff173d5" May 13 
08:25:38.680775 env[1260]: time="2025-05-13T08:25:38.680148873Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:25:38.775483 env[1260]: time="2025-05-13T08:25:38.775291449Z" level=error msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" failed" error="failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:38.776475 kubelet[2219]: E0513 08:25:38.776390 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:25:38.776475 kubelet[2219]: E0513 08:25:38.776471 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191"} May 13 08:25:38.777329 kubelet[2219]: E0513 08:25:38.776524 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 
08:25:38.777329 kubelet[2219]: E0513 08:25:38.776568 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" May 13 08:25:40.680658 env[1260]: time="2025-05-13T08:25:40.678864765Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:25:40.683162 env[1260]: time="2025-05-13T08:25:40.683079115Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:25:40.783085 env[1260]: time="2025-05-13T08:25:40.783028274Z" level=error msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" failed" error="failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:40.783690 kubelet[2219]: E0513 08:25:40.783466 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:25:40.783690 kubelet[2219]: E0513 08:25:40.783547 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c"} May 13 08:25:40.783690 kubelet[2219]: E0513 08:25:40.783613 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:40.783690 kubelet[2219]: E0513 08:25:40.783647 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:40.787818 env[1260]: time="2025-05-13T08:25:40.787778928Z" level=error msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" failed" error="failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 
08:25:40.788122 kubelet[2219]: E0513 08:25:40.788075 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:25:40.788237 kubelet[2219]: E0513 08:25:40.788133 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435"} May 13 08:25:40.788237 kubelet[2219]: E0513 08:25:40.788173 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:40.788237 kubelet[2219]: E0513 08:25:40.788201 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" podUID="ff325600-1957-4033-960b-4591b00b1eb4" May 13 
08:25:41.678561 env[1260]: time="2025-05-13T08:25:41.678432984Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:25:41.743735 env[1260]: time="2025-05-13T08:25:41.743260207Z" level=error msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" failed" error="failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:41.744548 kubelet[2219]: E0513 08:25:41.744049 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:25:41.744548 kubelet[2219]: E0513 08:25:41.744266 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b"} May 13 08:25:41.744548 kubelet[2219]: E0513 08:25:41.744431 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 
08:25:41.745013 kubelet[2219]: E0513 08:25:41.744627 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" May 13 08:25:43.679243 env[1260]: time="2025-05-13T08:25:43.679139383Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:25:43.756826 env[1260]: time="2025-05-13T08:25:43.756715458Z" level=error msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" failed" error="failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:43.758476 kubelet[2219]: E0513 08:25:43.757729 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:25:43.758476 kubelet[2219]: E0513 08:25:43.757980 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7"} May 13 08:25:43.758476 kubelet[2219]: E0513 08:25:43.758185 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:43.758476 kubelet[2219]: E0513 08:25:43.758322 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" May 13 08:25:44.677649 kubelet[2219]: I0513 08:25:44.677533 2219 scope.go:117] "RemoveContainer" containerID="de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478" May 13 08:25:44.690925 env[1260]: time="2025-05-13T08:25:44.690347983Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 13 08:25:44.734312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930921573.mount: Deactivated successfully. 
May 13 08:25:44.760238 env[1260]: time="2025-05-13T08:25:44.760086278Z" level=info msg="CreateContainer within sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\"" May 13 08:25:44.761779 env[1260]: time="2025-05-13T08:25:44.761712834Z" level=info msg="StartContainer for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\"" May 13 08:25:44.895071 env[1260]: time="2025-05-13T08:25:44.894989742Z" level=info msg="StartContainer for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" returns successfully" May 13 08:25:45.026758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84-rootfs.mount: Deactivated successfully. May 13 08:25:45.037871 env[1260]: time="2025-05-13T08:25:45.037808757Z" level=info msg="shim disconnected" id=0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84 May 13 08:25:45.037871 env[1260]: time="2025-05-13T08:25:45.037857860Z" level=warning msg="cleaning up after shim disconnected" id=0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84 namespace=k8s.io May 13 08:25:45.037871 env[1260]: time="2025-05-13T08:25:45.037872247Z" level=info msg="cleaning up dead shim" May 13 08:25:45.045031 kubelet[2219]: I0513 08:25:45.044900 2219 status_manager.go:317] "Container readiness changed for unknown container" pod="calico-system/calico-node-htsgx" containerID="containerd://de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478" May 13 08:25:45.061940 env[1260]: time="2025-05-13T08:25:45.061882924Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n" May 13 08:25:45.071809 kubelet[2219]: I0513 08:25:45.071197 2219 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-htsgx" podStartSLOduration=19.23087138 podStartE2EDuration="53.071153181s" podCreationTimestamp="2025-05-13 08:24:52 +0000 UTC" firstStartedPulling="2025-05-13 08:24:53.424223073 +0000 UTC m=+24.932743562" lastFinishedPulling="2025-05-13 08:25:27.264504874 +0000 UTC m=+58.773025363" observedRunningTime="2025-05-13 08:25:28.088763469 +0000 UTC m=+59.597283998" watchObservedRunningTime="2025-05-13 08:25:45.071153181 +0000 UTC m=+76.579673660" May 13 08:25:45.333643 env[1260]: time="2025-05-13T08:25:45.333363369Z" level=error msg="ExecSync for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" May 13 08:25:45.334711 kubelet[2219]: E0513 08:25:45.334452 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:45.335275 env[1260]: time="2025-05-13T08:25:45.335160661Z" level=error msg="ExecSync for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" May 13 08:25:45.336046 kubelet[2219]: E0513 08:25:45.335952 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:45.336566 env[1260]: time="2025-05-13T08:25:45.336461640Z" level=error msg="ExecSync for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" failed" error="failed to exec in 
container: container is in CONTAINER_EXITED state" May 13 08:25:45.337011 kubelet[2219]: E0513 08:25:45.336956 2219 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] May 13 08:25:45.683641 env[1260]: time="2025-05-13T08:25:45.683450144Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:25:45.684844 env[1260]: time="2025-05-13T08:25:45.683918552Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:25:45.737071 env[1260]: time="2025-05-13T08:25:45.736975906Z" level=error msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" failed" error="failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:45.738217 kubelet[2219]: E0513 08:25:45.737680 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:25:45.738217 kubelet[2219]: E0513 08:25:45.737910 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d"} May 13 08:25:45.738217 kubelet[2219]: E0513 08:25:45.738148 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:45.738653 kubelet[2219]: E0513 08:25:45.738251 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" May 13 08:25:45.758532 env[1260]: time="2025-05-13T08:25:45.758384945Z" level=error msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" failed" error="failed to destroy network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:45.759311 kubelet[2219]: E0513 08:25:45.759194 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:45.759462 kubelet[2219]: E0513 08:25:45.759292 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd"} May 13 08:25:45.759462 kubelet[2219]: E0513 08:25:45.759384 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:45.759462 kubelet[2219]: E0513 08:25:45.759434 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"033c33ae-894a-48f0-a6ac-c8632ff173d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fx4p8" podUID="033c33ae-894a-48f0-a6ac-c8632ff173d5" May 13 08:25:46.071249 kubelet[2219]: I0513 08:25:46.071093 2219 scope.go:117] "RemoveContainer" containerID="de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478" May 13 08:25:46.076314 env[1260]: 
time="2025-05-13T08:25:46.076220554Z" level=info msg="RemoveContainer for \"de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478\"" May 13 08:25:46.077457 kubelet[2219]: I0513 08:25:46.077296 2219 scope.go:117] "RemoveContainer" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" May 13 08:25:46.092678 env[1260]: time="2025-05-13T08:25:46.086980247Z" level=info msg="RemoveContainer for \"de6151da79b06bbb0157a38996c4a8609b7464e7b7fc5e56dce328b0e054a478\" returns successfully" May 13 08:25:46.095115 kubelet[2219]: E0513 08:25:46.094979 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-htsgx_calico-system(8abb2ca2-052d-4f35-a41c-c4db1f01016e)\"" pod="calico-system/calico-node-htsgx" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" May 13 08:25:47.081752 kubelet[2219]: I0513 08:25:47.081681 2219 scope.go:117] "RemoveContainer" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" May 13 08:25:47.087210 kubelet[2219]: E0513 08:25:47.087138 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-htsgx_calico-system(8abb2ca2-052d-4f35-a41c-c4db1f01016e)\"" pod="calico-system/calico-node-htsgx" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" May 13 08:25:50.165524 kubelet[2219]: I0513 08:25:50.165432 2219 scope.go:117] "RemoveContainer" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" May 13 08:25:50.167894 kubelet[2219]: E0513 08:25:50.167779 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node 
pod=calico-node-htsgx_calico-system(8abb2ca2-052d-4f35-a41c-c4db1f01016e)\"" pod="calico-system/calico-node-htsgx" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" May 13 08:25:50.683007 env[1260]: time="2025-05-13T08:25:50.681905305Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:25:50.777084 env[1260]: time="2025-05-13T08:25:50.776894439Z" level=error msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" failed" error="failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:50.778219 kubelet[2219]: E0513 08:25:50.777782 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:25:50.778219 kubelet[2219]: E0513 08:25:50.777991 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191"} May 13 08:25:50.778219 kubelet[2219]: E0513 08:25:50.778095 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:50.778842 kubelet[2219]: E0513 08:25:50.778722 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b8bef73-58a7-4997-947c-91687cbacd52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podUID="9b8bef73-58a7-4997-947c-91687cbacd52" May 13 08:25:53.678710 env[1260]: time="2025-05-13T08:25:53.678629288Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:25:53.779814 env[1260]: time="2025-05-13T08:25:53.779760611Z" level=error msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" failed" error="failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:53.781653 kubelet[2219]: E0513 08:25:53.780211 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 
08:25:53.781653 kubelet[2219]: E0513 08:25:53.780277 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c"} May 13 08:25:53.781653 kubelet[2219]: E0513 08:25:53.780317 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:53.781653 kubelet[2219]: E0513 08:25:53.780353 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3a34ed8-4b6e-4268-a42b-192aa9ef609b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zls78" podUID="c3a34ed8-4b6e-4268-a42b-192aa9ef609b" May 13 08:25:53.837321 env[1260]: time="2025-05-13T08:25:53.837224094Z" level=info msg="StopContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" with timeout 300 (s)" May 13 08:25:53.838359 env[1260]: time="2025-05-13T08:25:53.838309711Z" level=info msg="Stop container \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" with signal terminated" May 13 08:25:53.916000 audit[3910]: NETFILTER_CFG table=filter:97 family=2 entries=16 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 
08:25:53.916000 audit[3910]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe92f56040 a2=0 a3=7ffe92f5602c items=0 ppid=2353 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:53.931910 kernel: audit: type=1325 audit(1747124753.916:300): table=filter:97 family=2 entries=16 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:53.932053 kernel: audit: type=1300 audit(1747124753.916:300): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe92f56040 a2=0 a3=7ffe92f5602c items=0 ppid=2353 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:53.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:53.948614 kernel: audit: type=1327 audit(1747124753.916:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:53.938000 audit[3910]: NETFILTER_CFG table=nat:98 family=2 entries=22 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:53.962603 kernel: audit: type=1325 audit(1747124753.938:301): table=nat:98 family=2 entries=22 op=nft_register_rule pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:53.938000 audit[3910]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe92f56040 a2=0 a3=7ffe92f5602c items=0 ppid=2353 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:53.974616 kernel: audit: type=1300 audit(1747124753.938:301): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe92f56040 a2=0 a3=7ffe92f5602c items=0 ppid=2353 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:53.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:53.982671 kernel: audit: type=1327 audit(1747124753.938:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:53.985960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce-rootfs.mount: Deactivated successfully. May 13 08:25:53.998766 env[1260]: time="2025-05-13T08:25:53.998721247Z" level=info msg="shim disconnected" id=317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce May 13 08:25:53.999127 env[1260]: time="2025-05-13T08:25:53.999098065Z" level=warning msg="cleaning up after shim disconnected" id=317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce namespace=k8s.io May 13 08:25:53.999237 env[1260]: time="2025-05-13T08:25:53.999215297Z" level=info msg="cleaning up dead shim" May 13 08:25:54.012098 env[1260]: time="2025-05-13T08:25:54.010404612Z" level=info msg="StopPodSandbox for \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\"" May 13 08:25:54.012558 env[1260]: time="2025-05-13T08:25:54.012519933Z" level=info msg="Container to stop \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:25:54.012742 env[1260]: time="2025-05-13T08:25:54.012710174Z" level=info msg="Container to 
stop \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:25:54.012879 env[1260]: time="2025-05-13T08:25:54.012852666Z" level=info msg="Container to stop \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:25:54.016810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2-shm.mount: Deactivated successfully. May 13 08:25:54.086766 env[1260]: time="2025-05-13T08:25:54.086728413Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3917 runtime=io.containerd.runc.v2\n" May 13 08:25:54.092034 env[1260]: time="2025-05-13T08:25:54.091842816Z" level=info msg="StopContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" returns successfully" May 13 08:25:54.093229 env[1260]: time="2025-05-13T08:25:54.093197176Z" level=info msg="StopPodSandbox for \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\"" May 13 08:25:54.093435 env[1260]: time="2025-05-13T08:25:54.093394843Z" level=info msg="Container to stop \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:25:54.097252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288-shm.mount: Deactivated successfully. May 13 08:25:54.106140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2-rootfs.mount: Deactivated successfully. 
May 13 08:25:54.112143 env[1260]: time="2025-05-13T08:25:54.112100799Z" level=info msg="shim disconnected" id=bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2 May 13 08:25:54.113045 env[1260]: time="2025-05-13T08:25:54.113023547Z" level=warning msg="cleaning up after shim disconnected" id=bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2 namespace=k8s.io May 13 08:25:54.113211 env[1260]: time="2025-05-13T08:25:54.113193251Z" level=info msg="cleaning up dead shim" May 13 08:25:54.149342 env[1260]: time="2025-05-13T08:25:54.149286920Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n" May 13 08:25:54.150126 env[1260]: time="2025-05-13T08:25:54.150088307Z" level=info msg="TearDown network for sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" successfully" May 13 08:25:54.150296 env[1260]: time="2025-05-13T08:25:54.150266206Z" level=info msg="StopPodSandbox for \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" returns successfully" May 13 08:25:54.167732 env[1260]: time="2025-05-13T08:25:54.166219689Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:25:54.199713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288-rootfs.mount: Deactivated successfully. 
May 13 08:25:54.211532 env[1260]: time="2025-05-13T08:25:54.211488368Z" level=info msg="shim disconnected" id=ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288 May 13 08:25:54.217088 env[1260]: time="2025-05-13T08:25:54.217038392Z" level=warning msg="cleaning up after shim disconnected" id=ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288 namespace=k8s.io May 13 08:25:54.217283 env[1260]: time="2025-05-13T08:25:54.217259432Z" level=info msg="cleaning up dead shim" May 13 08:25:54.244686 env[1260]: time="2025-05-13T08:25:54.244642739Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3991 runtime=io.containerd.runc.v2\n" May 13 08:25:54.245184 env[1260]: time="2025-05-13T08:25:54.245157229Z" level=info msg="TearDown network for sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" successfully" May 13 08:25:54.245270 env[1260]: time="2025-05-13T08:25:54.245250787Z" level=info msg="StopPodSandbox for \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" returns successfully" May 13 08:25:54.282191 kubelet[2219]: I0513 08:25:54.282160 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8dbb72fc-3576-4fb0-a420-2012a4770e14-typha-certs\") pod \"8dbb72fc-3576-4fb0-a420-2012a4770e14\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " May 13 08:25:54.282853 kubelet[2219]: I0513 08:25:54.282835 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-flexvol-driver-host\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283090 kubelet[2219]: I0513 08:25:54.283073 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rcc2n\" (UniqueName: \"kubernetes.io/projected/8dbb72fc-3576-4fb0-a420-2012a4770e14-kube-api-access-rcc2n\") pod \"8dbb72fc-3576-4fb0-a420-2012a4770e14\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " May 13 08:25:54.283182 kubelet[2219]: I0513 08:25:54.283166 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-run-calico\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283269 kubelet[2219]: I0513 08:25:54.283255 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8fbp\" (UniqueName: \"kubernetes.io/projected/8abb2ca2-052d-4f35-a41c-c4db1f01016e-kube-api-access-k8fbp\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283350 kubelet[2219]: I0513 08:25:54.283338 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-net-dir\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283447 kubelet[2219]: I0513 08:25:54.283433 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8abb2ca2-052d-4f35-a41c-c4db1f01016e-node-certs\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283588 kubelet[2219]: I0513 08:25:54.283553 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbb72fc-3576-4fb0-a420-2012a4770e14-tigera-ca-bundle\") pod \"8dbb72fc-3576-4fb0-a420-2012a4770e14\" (UID: \"8dbb72fc-3576-4fb0-a420-2012a4770e14\") " May 13 
08:25:54.283681 kubelet[2219]: I0513 08:25:54.283665 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-bin-dir\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283762 kubelet[2219]: I0513 08:25:54.283749 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-lib-calico\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283841 kubelet[2219]: I0513 08:25:54.283828 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-xtables-lock\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.283921 kubelet[2219]: I0513 08:25:54.283908 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-lib-modules\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.284069 kubelet[2219]: I0513 08:25:54.284036 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-policysync\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.284167 kubelet[2219]: I0513 08:25:54.284153 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-log-dir\") pod 
\"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.284270 kubelet[2219]: I0513 08:25:54.284256 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8abb2ca2-052d-4f35-a41c-c4db1f01016e-tigera-ca-bundle\") pod \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\" (UID: \"8abb2ca2-052d-4f35-a41c-c4db1f01016e\") " May 13 08:25:54.285310 kubelet[2219]: I0513 08:25:54.285281 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.285455 kubelet[2219]: I0513 08:25:54.285437 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.285567 kubelet[2219]: I0513 08:25:54.285544 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.285680 kubelet[2219]: I0513 08:25:54.285664 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.285770 kubelet[2219]: I0513 08:25:54.285755 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-policysync" (OuterVolumeSpecName: "policysync") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.306118 kubelet[2219]: I0513 08:25:54.285982 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.306325 kubelet[2219]: I0513 08:25:54.287706 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.319531 kubelet[2219]: I0513 08:25:54.287725 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.330153 kubelet[2219]: I0513 08:25:54.330118 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8abb2ca2-052d-4f35-a41c-c4db1f01016e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 08:25:54.358308 kubelet[2219]: I0513 08:25:54.332669 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:25:54.358308 kubelet[2219]: I0513 08:25:54.348031 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dbb72fc-3576-4fb0-a420-2012a4770e14-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "8dbb72fc-3576-4fb0-a420-2012a4770e14" (UID: "8dbb72fc-3576-4fb0-a420-2012a4770e14"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:25:54.358879 kubelet[2219]: I0513 08:25:54.358822 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbb72fc-3576-4fb0-a420-2012a4770e14-kube-api-access-rcc2n" (OuterVolumeSpecName: "kube-api-access-rcc2n") pod "8dbb72fc-3576-4fb0-a420-2012a4770e14" (UID: "8dbb72fc-3576-4fb0-a420-2012a4770e14"). InnerVolumeSpecName "kube-api-access-rcc2n". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:25:54.359240 env[1260]: time="2025-05-13T08:25:54.359157696Z" level=error msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" failed" error="failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:54.360390 kubelet[2219]: I0513 08:25:54.359173 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8abb2ca2-052d-4f35-a41c-c4db1f01016e-kube-api-access-k8fbp" (OuterVolumeSpecName: "kube-api-access-k8fbp") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "kube-api-access-k8fbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:25:54.360390 kubelet[2219]: E0513 08:25:54.360283 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:25:54.360543 kubelet[2219]: E0513 08:25:54.360433 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435"} May 13 08:25:54.360632 kubelet[2219]: E0513 08:25:54.360565 2219 kubelet.go:2040] failed to "KillPodSandbox" for "ff325600-1957-4033-960b-4591b00b1eb4" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:54.360692 kubelet[2219]: E0513 08:25:54.360635 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff325600-1957-4033-960b-4591b00b1eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c9b5b8b87-ffc5v" podUID="ff325600-1957-4033-960b-4591b00b1eb4" May 13 
08:25:54.360816 kubelet[2219]: I0513 08:25:54.347796 2219 topology_manager.go:215] "Topology Admit Handler" podUID="9b0b7d42-f7e5-46f4-be54-6bf801e08070" podNamespace="calico-system" podName="calico-node-ljsjl" May 13 08:25:54.361075 kubelet[2219]: E0513 08:25:54.361046 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="flexvol-driver" May 13 08:25:54.361166 kubelet[2219]: E0513 08:25:54.361154 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8dbb72fc-3576-4fb0-a420-2012a4770e14" containerName="calico-typha" May 13 08:25:54.361246 kubelet[2219]: E0513 08:25:54.361235 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="install-cni" May 13 08:25:54.363711 kubelet[2219]: E0513 08:25:54.363691 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 08:25:54.363838 kubelet[2219]: E0513 08:25:54.363825 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 08:25:54.363907 kubelet[2219]: E0513 08:25:54.363896 2219 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 08:25:54.364061 kubelet[2219]: I0513 08:25:54.364047 2219 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbb72fc-3576-4fb0-a420-2012a4770e14" containerName="calico-typha" May 13 08:25:54.364132 kubelet[2219]: I0513 08:25:54.364121 2219 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 08:25:54.364253 kubelet[2219]: I0513 08:25:54.364240 2219 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 
08:25:54.364382 kubelet[2219]: I0513 08:25:54.364367 2219 memory_manager.go:354] "RemoveStaleState removing state" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" containerName="calico-node" May 13 08:25:54.371022 kubelet[2219]: I0513 08:25:54.370990 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbb72fc-3576-4fb0-a420-2012a4770e14-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8dbb72fc-3576-4fb0-a420-2012a4770e14" (UID: "8dbb72fc-3576-4fb0-a420-2012a4770e14"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 08:25:54.384922 kubelet[2219]: I0513 08:25:54.384879 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-xtables-lock\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.386823 kubelet[2219]: I0513 08:25:54.385803 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-cni-net-dir\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.386823 kubelet[2219]: I0513 08:25:54.385917 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0b7d42-f7e5-46f4-be54-6bf801e08070-tigera-ca-bundle\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.386823 kubelet[2219]: I0513 08:25:54.385961 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/9b0b7d42-f7e5-46f4-be54-6bf801e08070-node-certs\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.386823 kubelet[2219]: I0513 08:25:54.385992 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-cni-log-dir\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.386823 kubelet[2219]: I0513 08:25:54.386037 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ljx\" (UniqueName: \"kubernetes.io/projected/9b0b7d42-f7e5-46f4-be54-6bf801e08070-kube-api-access-f9ljx\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387045 kubelet[2219]: I0513 08:25:54.386066 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-flexvol-driver-host\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387045 kubelet[2219]: I0513 08:25:54.386090 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-var-run-calico\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387045 kubelet[2219]: I0513 08:25:54.386130 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-var-lib-calico\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387045 kubelet[2219]: I0513 08:25:54.386159 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-cni-bin-dir\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387045 kubelet[2219]: I0513 08:25:54.386207 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-lib-modules\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386244 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b0b7d42-f7e5-46f4-be54-6bf801e08070-policysync\") pod \"calico-node-ljsjl\" (UID: \"9b0b7d42-f7e5-46f4-be54-6bf801e08070\") " pod="calico-system/calico-node-ljsjl" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386301 2219 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k8fbp\" (UniqueName: \"kubernetes.io/projected/8abb2ca2-052d-4f35-a41c-c4db1f01016e-kube-api-access-k8fbp\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386316 2219 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-net-dir\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386328 
2219 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dbb72fc-3576-4fb0-a420-2012a4770e14-tigera-ca-bundle\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386338 2219 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-bin-dir\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386348 2219 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-lib-calico\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387227 kubelet[2219]: I0513 08:25:54.386375 2219 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-xtables-lock\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386395 2219 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-lib-modules\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386411 2219 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-policysync\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386428 2219 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-cni-log-dir\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 
08:25:54.387489 kubelet[2219]: I0513 08:25:54.386469 2219 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8abb2ca2-052d-4f35-a41c-c4db1f01016e-tigera-ca-bundle\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386489 2219 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8dbb72fc-3576-4fb0-a420-2012a4770e14-typha-certs\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386518 2219 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-flexvol-driver-host\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387489 kubelet[2219]: I0513 08:25:54.386560 2219 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rcc2n\" (UniqueName: \"kubernetes.io/projected/8dbb72fc-3576-4fb0-a420-2012a4770e14-kube-api-access-rcc2n\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.387774 kubelet[2219]: I0513 08:25:54.386591 2219 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8abb2ca2-052d-4f35-a41c-c4db1f01016e-var-run-calico\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.389760 kubelet[2219]: I0513 08:25:54.389730 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8abb2ca2-052d-4f35-a41c-c4db1f01016e-node-certs" (OuterVolumeSpecName: "node-certs") pod "8abb2ca2-052d-4f35-a41c-c4db1f01016e" (UID: "8abb2ca2-052d-4f35-a41c-c4db1f01016e"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:25:54.487511 kubelet[2219]: I0513 08:25:54.487339 2219 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8abb2ca2-052d-4f35-a41c-c4db1f01016e-node-certs\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:25:54.518000 audit[4021]: NETFILTER_CFG table=filter:99 family=2 entries=17 op=nft_register_rule pid=4021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:54.518000 audit[4021]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fff86de78a0 a2=0 a3=7fff86de788c items=0 ppid=2353 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:54.533138 kernel: audit: type=1325 audit(1747124754.518:302): table=filter:99 family=2 entries=17 op=nft_register_rule pid=4021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:54.533234 kernel: audit: type=1300 audit(1747124754.518:302): arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fff86de78a0 a2=0 a3=7fff86de788c items=0 ppid=2353 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:54.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:54.540601 kernel: audit: type=1327 audit(1747124754.518:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:54.540800 kernel: audit: type=1325 audit(1747124754.532:303): table=nat:100 family=2 entries=19 op=nft_unregister_chain pid=4021 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" May 13 08:25:54.532000 audit[4021]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_unregister_chain pid=4021 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:54.532000 audit[4021]: SYSCALL arch=c000003e syscall=46 success=yes exit=2956 a0=3 a1=7fff86de78a0 a2=0 a3=0 items=0 ppid=2353 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:54.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:54.680621 env[1260]: time="2025-05-13T08:25:54.680084710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ljsjl,Uid:9b0b7d42-f7e5-46f4-be54-6bf801e08070,Namespace:calico-system,Attempt:0,}" May 13 08:25:54.701979 env[1260]: time="2025-05-13T08:25:54.701897605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:25:54.702126 env[1260]: time="2025-05-13T08:25:54.701968139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:25:54.702126 env[1260]: time="2025-05-13T08:25:54.701994429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:25:54.702261 env[1260]: time="2025-05-13T08:25:54.702214939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e pid=4030 runtime=io.containerd.runc.v2 May 13 08:25:54.794397 env[1260]: time="2025-05-13T08:25:54.793738997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ljsjl,Uid:9b0b7d42-f7e5-46f4-be54-6bf801e08070,Namespace:calico-system,Attempt:0,} returns sandbox id \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\"" May 13 08:25:54.798832 env[1260]: time="2025-05-13T08:25:54.798643771Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 08:25:54.818620 env[1260]: time="2025-05-13T08:25:54.816980624Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09\"" May 13 08:25:54.819008 env[1260]: time="2025-05-13T08:25:54.818974973Z" level=info msg="StartContainer for \"b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09\"" May 13 08:25:54.832002 kubelet[2219]: I0513 08:25:54.831484 2219 topology_manager.go:215] "Topology Admit Handler" podUID="31725fa4-5099-484d-9e50-9e0df44eceeb" podNamespace="calico-system" podName="calico-typha-75f94f5ccb-wcptg" May 13 08:25:54.890502 kubelet[2219]: I0513 08:25:54.890318 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31725fa4-5099-484d-9e50-9e0df44eceeb-tigera-ca-bundle\") pod \"calico-typha-75f94f5ccb-wcptg\" (UID: 
\"31725fa4-5099-484d-9e50-9e0df44eceeb\") " pod="calico-system/calico-typha-75f94f5ccb-wcptg" May 13 08:25:54.890502 kubelet[2219]: I0513 08:25:54.890361 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfzl\" (UniqueName: \"kubernetes.io/projected/31725fa4-5099-484d-9e50-9e0df44eceeb-kube-api-access-rvfzl\") pod \"calico-typha-75f94f5ccb-wcptg\" (UID: \"31725fa4-5099-484d-9e50-9e0df44eceeb\") " pod="calico-system/calico-typha-75f94f5ccb-wcptg" May 13 08:25:54.890502 kubelet[2219]: I0513 08:25:54.890408 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/31725fa4-5099-484d-9e50-9e0df44eceeb-typha-certs\") pod \"calico-typha-75f94f5ccb-wcptg\" (UID: \"31725fa4-5099-484d-9e50-9e0df44eceeb\") " pod="calico-system/calico-typha-75f94f5ccb-wcptg" May 13 08:25:54.923924 env[1260]: time="2025-05-13T08:25:54.923839527Z" level=info msg="StartContainer for \"b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09\" returns successfully" May 13 08:25:54.989862 systemd[1]: var-lib-kubelet-pods-8abb2ca2\x2d052d\x2d4f35\x2da41c\x2dc4db1f01016e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 13 08:25:54.990020 systemd[1]: var-lib-kubelet-pods-8dbb72fc\x2d3576\x2d4fb0\x2da420\x2d2012a4770e14-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 13 08:25:54.990180 systemd[1]: var-lib-kubelet-pods-8abb2ca2\x2d052d\x2d4f35\x2da41c\x2dc4db1f01016e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8fbp.mount: Deactivated successfully. May 13 08:25:54.990345 systemd[1]: var-lib-kubelet-pods-8abb2ca2\x2d052d\x2d4f35\x2da41c\x2dc4db1f01016e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 13 08:25:54.990502 systemd[1]: var-lib-kubelet-pods-8dbb72fc\x2d3576\x2d4fb0\x2da420\x2d2012a4770e14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcc2n.mount: Deactivated successfully. May 13 08:25:54.990623 systemd[1]: var-lib-kubelet-pods-8dbb72fc\x2d3576\x2d4fb0\x2da420\x2d2012a4770e14-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 13 08:25:55.002502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09-rootfs.mount: Deactivated successfully. May 13 08:25:55.033047 env[1260]: time="2025-05-13T08:25:55.030963823Z" level=info msg="shim disconnected" id=b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09 May 13 08:25:55.033047 env[1260]: time="2025-05-13T08:25:55.031023005Z" level=warning msg="cleaning up after shim disconnected" id=b12a8be636c470ce52962a868d5257990f03074c9dfcee0f743096bb1868ca09 namespace=k8s.io May 13 08:25:55.033047 env[1260]: time="2025-05-13T08:25:55.031035049Z" level=info msg="cleaning up dead shim" May 13 08:25:55.046023 env[1260]: time="2025-05-13T08:25:55.045922489Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4115 runtime=io.containerd.runc.v2\n" May 13 08:25:55.116072 kubelet[2219]: I0513 08:25:55.116050 2219 scope.go:117] "RemoveContainer" containerID="0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84" May 13 08:25:55.122183 env[1260]: time="2025-05-13T08:25:55.121605679Z" level=info msg="RemoveContainer for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\"" May 13 08:25:55.127031 env[1260]: time="2025-05-13T08:25:55.126984990Z" level=info msg="RemoveContainer for \"0c13da2113a8c2e5f72aa181387cf926fafd744906a652252d9b3dddd098ce84\" returns successfully" May 13 08:25:55.127402 kubelet[2219]: I0513 08:25:55.127357 2219 scope.go:117] "RemoveContainer" 
containerID="54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02" May 13 08:25:55.129940 env[1260]: time="2025-05-13T08:25:55.129906468Z" level=info msg="RemoveContainer for \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\"" May 13 08:25:55.140958 env[1260]: time="2025-05-13T08:25:55.140867055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f94f5ccb-wcptg,Uid:31725fa4-5099-484d-9e50-9e0df44eceeb,Namespace:calico-system,Attempt:0,}" May 13 08:25:55.147860 env[1260]: time="2025-05-13T08:25:55.146606183Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 08:25:55.152377 env[1260]: time="2025-05-13T08:25:55.152329740Z" level=info msg="RemoveContainer for \"54fb24f184e08bb0681c022143f063dea111941ac033fa610cd14638d1d55f02\" returns successfully" May 13 08:25:55.152845 kubelet[2219]: I0513 08:25:55.152799 2219 scope.go:117] "RemoveContainer" containerID="e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0" May 13 08:25:55.184132 env[1260]: time="2025-05-13T08:25:55.176490343Z" level=info msg="RemoveContainer for \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\"" May 13 08:25:55.192427 env[1260]: time="2025-05-13T08:25:55.192359825Z" level=info msg="RemoveContainer for \"e94c1cf4645aea2a4aabcb0167f329eaf41b640bdf40176a96e208bb5218a6f0\" returns successfully" May 13 08:25:55.193041 kubelet[2219]: I0513 08:25:55.192908 2219 scope.go:117] "RemoveContainer" containerID="317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce" May 13 08:25:55.196492 env[1260]: time="2025-05-13T08:25:55.196452906Z" level=info msg="RemoveContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\"" May 13 08:25:55.207354 env[1260]: time="2025-05-13T08:25:55.206503349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:25:55.207354 env[1260]: time="2025-05-13T08:25:55.206599412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:25:55.207354 env[1260]: time="2025-05-13T08:25:55.206622316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:25:55.207354 env[1260]: time="2025-05-13T08:25:55.206835061Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9eea3954de832e26ae422849a43eeed264f3ea55df0dc437bd1c51725937bb0 pid=4141 runtime=io.containerd.runc.v2 May 13 08:25:55.221538 env[1260]: time="2025-05-13T08:25:55.221403785Z" level=info msg="RemoveContainer for \"317125348d996f5cb55dccc16b27e3a2ff074974b59a80d7c18d439bc31742ce\" returns successfully" May 13 08:25:55.222201 env[1260]: time="2025-05-13T08:25:55.222155197Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409\"" May 13 08:25:55.225006 env[1260]: time="2025-05-13T08:25:55.224888646Z" level=info msg="StartContainer for \"d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409\"" May 13 08:25:55.350057 env[1260]: time="2025-05-13T08:25:55.349852850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f94f5ccb-wcptg,Uid:31725fa4-5099-484d-9e50-9e0df44eceeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9eea3954de832e26ae422849a43eeed264f3ea55df0dc437bd1c51725937bb0\"" May 13 08:25:55.370208 env[1260]: time="2025-05-13T08:25:55.370122107Z" level=info msg="CreateContainer within sandbox \"b9eea3954de832e26ae422849a43eeed264f3ea55df0dc437bd1c51725937bb0\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 08:25:55.400682 env[1260]: time="2025-05-13T08:25:55.398906602Z" level=info msg="CreateContainer within sandbox \"b9eea3954de832e26ae422849a43eeed264f3ea55df0dc437bd1c51725937bb0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62b41d4516cf5b8de3cec38fb740918a3535a6d20166bd1ce908be74de000eb2\"" May 13 08:25:55.401499 env[1260]: time="2025-05-13T08:25:55.401456441Z" level=info msg="StartContainer for \"62b41d4516cf5b8de3cec38fb740918a3535a6d20166bd1ce908be74de000eb2\"" May 13 08:25:55.405313 env[1260]: time="2025-05-13T08:25:55.405270590Z" level=info msg="StartContainer for \"d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409\" returns successfully" May 13 08:25:55.510160 env[1260]: time="2025-05-13T08:25:55.510104122Z" level=info msg="StartContainer for \"62b41d4516cf5b8de3cec38fb740918a3535a6d20166bd1ce908be74de000eb2\" returns successfully" May 13 08:25:55.564000 audit[4244]: NETFILTER_CFG table=filter:101 family=2 entries=18 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:55.564000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffecb8e6bc0 a2=0 a3=7ffecb8e6bac items=0 ppid=2353 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:55.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:55.570000 audit[4244]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:55.570000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffecb8e6bc0 a2=0 a3=0 items=0 ppid=2353 pid=4244 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:55.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:55.677743 env[1260]: time="2025-05-13T08:25:55.677704899Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:25:55.764604 env[1260]: time="2025-05-13T08:25:55.764504904Z" level=error msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" failed" error="failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:55.771651 kubelet[2219]: E0513 08:25:55.771234 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:25:55.771651 kubelet[2219]: E0513 08:25:55.771344 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b"} May 13 08:25:55.771651 kubelet[2219]: E0513 08:25:55.771433 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:55.771651 kubelet[2219]: E0513 08:25:55.771488 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" May 13 08:25:56.211000 audit[4267]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:56.211000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffec7c66fb0 a2=0 a3=7ffec7c66f9c items=0 ppid=2353 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:56.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:56.216000 audit[4267]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:25:56.216000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffec7c66fb0 a2=0 a3=7ffec7c66f9c items=0 ppid=2353 pid=4267 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:56.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:25:56.679734 env[1260]: time="2025-05-13T08:25:56.679094616Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:25:56.689745 kubelet[2219]: I0513 08:25:56.689710 2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8abb2ca2-052d-4f35-a41c-c4db1f01016e" path="/var/lib/kubelet/pods/8abb2ca2-052d-4f35-a41c-c4db1f01016e/volumes" May 13 08:25:56.691498 kubelet[2219]: I0513 08:25:56.691481 2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dbb72fc-3576-4fb0-a420-2012a4770e14" path="/var/lib/kubelet/pods/8dbb72fc-3576-4fb0-a420-2012a4770e14/volumes" May 13 08:25:56.744898 env[1260]: time="2025-05-13T08:25:56.744817896Z" level=error msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" failed" error="failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:56.745970 kubelet[2219]: E0513 08:25:56.745612 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:25:56.745970 kubelet[2219]: E0513 08:25:56.745733 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7"} May 13 08:25:56.745970 kubelet[2219]: E0513 08:25:56.745859 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:56.745970 kubelet[2219]: E0513 08:25:56.745901 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" May 13 08:25:57.125966 env[1260]: time="2025-05-13T08:25:57.125920959Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/10-calico.conflist\": WRITE)" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 13 08:25:57.152286 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409-rootfs.mount: Deactivated successfully. May 13 08:25:57.176149 env[1260]: time="2025-05-13T08:25:57.175472381Z" level=info msg="shim disconnected" id=d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409 May 13 08:25:57.177836 env[1260]: time="2025-05-13T08:25:57.177811662Z" level=warning msg="cleaning up after shim disconnected" id=d308041fb2c5de5bcf550f417574ec5f73f2e35a0eb34fcbe2cb63391d3eb409 namespace=k8s.io May 13 08:25:57.177928 env[1260]: time="2025-05-13T08:25:57.177911623Z" level=info msg="cleaning up dead shim" May 13 08:25:57.186794 env[1260]: time="2025-05-13T08:25:57.186723637Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4298 runtime=io.containerd.runc.v2\n" May 13 08:25:57.678670 env[1260]: time="2025-05-13T08:25:57.678541103Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:25:57.771091 env[1260]: time="2025-05-13T08:25:57.771032368Z" level=error msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" failed" error="failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 08:25:57.771731 kubelet[2219]: E0513 08:25:57.771515 2219 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:25:57.771731 kubelet[2219]: E0513 08:25:57.771612 2219 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d"} May 13 08:25:57.771731 kubelet[2219]: E0513 08:25:57.771652 2219 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 08:25:57.771731 kubelet[2219]: E0513 08:25:57.771698 2219 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00446d97-a96a-4df2-93a8-5f3d59494b3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podUID="00446d97-a96a-4df2-93a8-5f3d59494b3b" May 13 08:25:58.249831 kubelet[2219]: I0513 08:25:58.244382 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75f94f5ccb-wcptg" podStartSLOduration=5.244255388 podStartE2EDuration="5.244255388s" podCreationTimestamp="2025-05-13 08:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 
08:25:56.183985093 +0000 UTC m=+87.692505592" watchObservedRunningTime="2025-05-13 08:25:58.244255388 +0000 UTC m=+89.752775877" May 13 08:25:58.256837 env[1260]: time="2025-05-13T08:25:58.256406447Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 08:25:58.281867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938936584.mount: Deactivated successfully. May 13 08:25:58.288695 env[1260]: time="2025-05-13T08:25:58.288628822Z" level=info msg="CreateContainer within sandbox \"be6abaa2862ecf278899c1094ee962eb2cc159b676f4ac22a37a180cd1c3506e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84\"" May 13 08:25:58.292654 env[1260]: time="2025-05-13T08:25:58.291884311Z" level=info msg="StartContainer for \"a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84\"" May 13 08:25:58.394156 env[1260]: time="2025-05-13T08:25:58.392626240Z" level=info msg="StartContainer for \"a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84\" returns successfully" May 13 08:25:58.680404 env[1260]: time="2025-05-13T08:25:58.680335242Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.787 [INFO][4394] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.787 [INFO][4394] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" iface="eth0" netns="/var/run/netns/cni-996967ce-2b20-2a41-900d-00e3714ee21a" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.787 [INFO][4394] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" iface="eth0" netns="/var/run/netns/cni-996967ce-2b20-2a41-900d-00e3714ee21a" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.788 [INFO][4394] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" iface="eth0" netns="/var/run/netns/cni-996967ce-2b20-2a41-900d-00e3714ee21a" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.788 [INFO][4394] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.788 [INFO][4394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.846 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.846 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.846 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.853 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.853 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.856 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:25:58.858948 env[1260]: 2025-05-13 08:25:58.857 [INFO][4394] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:25:58.860189 env[1260]: time="2025-05-13T08:25:58.860144423Z" level=info msg="TearDown network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" successfully" May 13 08:25:58.860324 env[1260]: time="2025-05-13T08:25:58.860282757Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" returns successfully" May 13 08:25:58.861676 env[1260]: time="2025-05-13T08:25:58.861632592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fx4p8,Uid:033c33ae-894a-48f0-a6ac-c8632ff173d5,Namespace:kube-system,Attempt:1,}" May 13 08:25:59.078140 systemd-networkd[1030]: cali0b937709769: Link UP May 13 08:25:59.084112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:25:59.084265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0b937709769: link becomes ready May 13 08:25:59.083256 systemd-networkd[1030]: cali0b937709769: Gained carrier May 13 08:25:59.114191 env[1260]: 2025-05-13 
08:25:58.909 [INFO][4420] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.928 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0 coredns-7db6d8ff4d- kube-system 033c33ae-894a-48f0-a6ac-c8632ff173d5 1022 0 2025-05-13 08:24:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal coredns-7db6d8ff4d-fx4p8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0b937709769 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.928 [INFO][4420] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.980 [INFO][4433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" HandleID="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.995 [INFO][4433] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" HandleID="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"coredns-7db6d8ff4d-fx4p8", "timestamp":"2025-05-13 08:25:58.980778636 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.996 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.996 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.996 [INFO][4433] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:58.999 [INFO][4433] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.007 [INFO][4433] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.016 [INFO][4433] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.020 [INFO][4433] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.023 [INFO][4433] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.023 [INFO][4433] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.025 [INFO][4433] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125 May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.036 [INFO][4433] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.047 [INFO][4433] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.24.1/26] block=192.168.24.0/26 handle="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.047 [INFO][4433] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.1/26] handle="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.047 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:25:59.114191 env[1260]: 2025-05-13 08:25:59.047 [INFO][4433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.1/26] IPv6=[] ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" HandleID="k8s-pod-network.433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.052 [INFO][4420] cni-plugin/k8s.go 386: Populated endpoint ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033c33ae-894a-48f0-a6ac-c8632ff173d5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-fx4p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b937709769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.052 [INFO][4420] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.1/32] ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.053 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b937709769 ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.092 [INFO][4420] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.092 [INFO][4420] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033c33ae-894a-48f0-a6ac-c8632ff173d5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125", Pod:"coredns-7db6d8ff4d-fx4p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b937709769", MAC:"4e:77:c4:f8:61:55", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:25:59.115026 env[1260]: 2025-05-13 08:25:59.112 [INFO][4420] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fx4p8" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:25:59.142344 env[1260]: time="2025-05-13T08:25:59.142213016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:25:59.142344 env[1260]: time="2025-05-13T08:25:59.142305113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:25:59.142629 env[1260]: time="2025-05-13T08:25:59.142319981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:25:59.142708 env[1260]: time="2025-05-13T08:25:59.142642066Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125 pid=4459 runtime=io.containerd.runc.v2 May 13 08:25:59.208853 systemd[1]: run-netns-cni\x2d996967ce\x2d2b20\x2d2a41\x2d900d\x2d00e3714ee21a.mount: Deactivated successfully. 
May 13 08:25:59.250429 env[1260]: time="2025-05-13T08:25:59.250368077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fx4p8,Uid:033c33ae-894a-48f0-a6ac-c8632ff173d5,Namespace:kube-system,Attempt:1,} returns sandbox id \"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125\"" May 13 08:25:59.259777 env[1260]: time="2025-05-13T08:25:59.258571220Z" level=info msg="CreateContainer within sandbox \"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 08:25:59.273257 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.9z7u1A.mount: Deactivated successfully. May 13 08:25:59.303376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916473892.mount: Deactivated successfully. May 13 08:25:59.317171 env[1260]: time="2025-05-13T08:25:59.317059090Z" level=info msg="CreateContainer within sandbox \"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27805fdd78890eb7d0f19d75d45b0ad06b77653fdedc6e050b54dee83f9ea5e3\"" May 13 08:25:59.319951 env[1260]: time="2025-05-13T08:25:59.319916983Z" level=info msg="StartContainer for \"27805fdd78890eb7d0f19d75d45b0ad06b77653fdedc6e050b54dee83f9ea5e3\"" May 13 08:25:59.413833 env[1260]: time="2025-05-13T08:25:59.413752090Z" level=info msg="StartContainer for \"27805fdd78890eb7d0f19d75d45b0ad06b77653fdedc6e050b54dee83f9ea5e3\" returns successfully" May 13 08:25:59.969000 audit[4590]: AVC avc: denied { write } for pid=4590 comm="tee" name="fd" dev="proc" ino=29725 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:25:59.972761 kernel: kauditd_printk_skb: 14 callbacks suppressed May 13 08:25:59.972842 kernel: audit: type=1400 audit(1747124759.969:308): avc: denied { write } for pid=4590 comm="tee" name="fd" 
dev="proc" ino=29725 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:25:59.969000 audit[4590]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe87736a00 a2=241 a3=1b6 items=1 ppid=4566 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:59.989676 kernel: audit: type=1300 audit(1747124759.969:308): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe87736a00 a2=241 a3=1b6 items=1 ppid=4566 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:59.969000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 13 08:25:59.997663 kernel: audit: type=1307 audit(1747124759.969:308): cwd="/etc/service/enabled/bird6/log" May 13 08:25:59.969000 audit: PATH item=0 name="/dev/fd/63" inode=29716 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.007887 kernel: audit: type=1302 audit(1747124759.969:308): item=0 name="/dev/fd/63" inode=29716 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:25:59.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.015677 kernel: audit: type=1327 audit(1747124759.969:308): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:25:59.992000 audit[4596]: AVC avc: denied { write } for pid=4596 
comm="tee" name="fd" dev="proc" ino=29730 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.029645 kernel: audit: type=1400 audit(1747124759.992:309): avc: denied { write } for pid=4596 comm="tee" name="fd" dev="proc" ino=29730 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:25:59.992000 audit[4596]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe345cea01 a2=241 a3=1b6 items=1 ppid=4560 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.043618 kernel: audit: type=1300 audit(1747124759.992:309): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe345cea01 a2=241 a3=1b6 items=1 ppid=4560 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:25:59.992000 audit: CWD cwd="/etc/service/enabled/bird/log" May 13 08:26:00.048682 kernel: audit: type=1307 audit(1747124759.992:309): cwd="/etc/service/enabled/bird/log" May 13 08:25:59.992000 audit: PATH item=0 name="/dev/fd/63" inode=28527 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.060624 kernel: audit: type=1302 audit(1747124759.992:309): item=0 name="/dev/fd/63" inode=28527 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:25:59.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.014000 audit[4592]: 
AVC avc: denied { write } for pid=4592 comm="tee" name="fd" dev="proc" ino=28552 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.014000 audit[4592]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd69f749f1 a2=241 a3=1b6 items=1 ppid=4557 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.014000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 13 08:26:00.014000 audit: PATH item=0 name="/dev/fd/63" inode=28522 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.014000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.034000 audit[4611]: AVC avc: denied { write } for pid=4611 comm="tee" name="fd" dev="proc" ino=28560 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.034000 audit[4611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe867faa02 a2=241 a3=1b6 items=1 ppid=4574 pid=4611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.034000 audit: CWD cwd="/etc/service/enabled/cni/log" May 13 08:26:00.072658 kernel: audit: type=1327 audit(1747124759.992:309): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.034000 audit: PATH item=0 name="/dev/fd/63" inode=28542 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.050000 audit[4613]: AVC avc: denied { write } for pid=4613 comm="tee" name="fd" dev="proc" ino=28565 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.050000 audit[4613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4cdf9a00 a2=241 a3=1b6 items=1 ppid=4567 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.050000 audit: CWD cwd="/etc/service/enabled/confd/log" May 13 08:26:00.050000 audit: PATH item=0 name="/dev/fd/63" inode=28551 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.077000 audit[4615]: AVC avc: denied { write } for pid=4615 comm="tee" name="fd" dev="proc" ino=28570 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.077000 audit[4615]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee31d8a00 a2=241 a3=1b6 items=1 ppid=4578 pid=4615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.077000 audit: CWD cwd="/etc/service/enabled/felix/log" May 13 08:26:00.077000 audit: PATH 
item=0 name="/dev/fd/63" inode=29733 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.080000 audit[4622]: AVC avc: denied { write } for pid=4622 comm="tee" name="fd" dev="proc" ino=28575 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 13 08:26:00.080000 audit[4622]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe743219f0 a2=241 a3=1b6 items=1 ppid=4579 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.080000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 13 08:26:00.080000 audit: PATH item=0 name="/dev/fd/63" inode=28562 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:26:00.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 13 08:26:00.271084 kubelet[2219]: I0513 08:26:00.269942 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ljsjl" podStartSLOduration=6.269923426 podStartE2EDuration="6.269923426s" podCreationTimestamp="2025-05-13 08:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:25:59.231353977 +0000 UTC m=+90.739874456" watchObservedRunningTime="2025-05-13 08:26:00.269923426 +0000 UTC m=+91.778443905" May 13 
08:26:00.276943 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.cCQ32v.mount: Deactivated successfully. May 13 08:26:00.322000 audit[4652]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:00.322000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe88f09550 a2=0 a3=7ffe88f0953c items=0 ppid=2353 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:00.334000 audit[4652]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:00.334000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe88f09550 a2=0 a3=0 items=0 ppid=2353 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC 
avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit: BPF prog-id=10 op=LOAD May 13 08:26:00.617000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb5a91460 a2=98 a3=3 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.617000 audit: BPF prog-id=10 op=UNLOAD 
May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit: BPF prog-id=11 op=LOAD May 13 
08:26:00.617000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb5a91240 a2=74 a3=540051 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.617000 audit: BPF prog-id=11 op=UNLOAD May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 
08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.617000 audit: BPF prog-id=12 op=LOAD May 13 08:26:00.617000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb5a91270 a2=94 a3=2 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.618000 audit: BPF prog-id=12 op=UNLOAD May 13 08:26:00.764877 systemd-networkd[1030]: cali0b937709769: Gained IPv6LL May 13 08:26:00.781000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.781000 audit: BPF prog-id=13 op=LOAD May 13 08:26:00.781000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb5a91130 a2=40 a3=1 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.783000 audit: BPF prog-id=13 op=UNLOAD May 13 08:26:00.783000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.783000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcb5a91200 a2=50 a3=7ffcb5a912e0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.783000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.796000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.796000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a91140 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.796000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb5a91170 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb5a91080 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 
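The `PROCTITLE proctitle=6270...` records repeated above carry the audited command line hex-encoded, with NUL bytes separating the argv entries. A minimal decoding sketch (Python; the helper name is ours, the hex is copied verbatim from the records):

```python
# Audit PROCTITLE values are the process argv, hex-encoded,
# with NUL (0x00) bytes separating the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# The proctitle logged for pid 4693 in the records above:
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
# → bpftool map list --json
```

In other words, the entire pid 4693 burst of `bpf`/`perfmon` capability denials traces back to a single `bpftool map list --json` invocation.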
08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a91190 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a91170 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a91160 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a91190 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.798000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.798000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb5a91170 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.798000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.799000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.799000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb5a91190 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 
08:26:00.799000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.799000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffcb5a91160 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.799000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.799000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcb5a911d0 a2=28 a3=0 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb5a90f80 a2=50 a3=1 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit: BPF prog-id=14 op=LOAD May 13 08:26:00.801000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 
a0=5 a1=7ffcb5a90f80 a2=94 a3=5 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.801000 audit: BPF prog-id=14 op=UNLOAD May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffcb5a91030 a2=50 a3=1 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffcb5a91150 a2=4 a3=38 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.801000 audit[4693]: AVC avc: denied { confidentiality } for pid=4693 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:00.801000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb5a911a0 a2=94 a3=6 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { confidentiality } for pid=4693 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:00.802000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb5a90950 a2=94 a3=83 items=0 ppid=4581 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { perfmon } for pid=4693 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { bpf } for pid=4693 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.802000 audit[4693]: AVC avc: denied { confidentiality } for pid=4693 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:00.802000 audit[4693]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffcb5a90950 a2=94 a3=83 items=0 ppid=4581 pid=4693 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 
08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit: BPF prog-id=15 op=LOAD May 13 08:26:00.833000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3c03d820 a2=98 a3=1999999999999999 items=0 ppid=4581 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.833000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 08:26:00.833000 audit: BPF prog-id=15 op=UNLOAD May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit: BPF prog-id=16 op=LOAD May 13 08:26:00.833000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3c03d700 a2=74 a3=ffff items=0 ppid=4581 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.833000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 08:26:00.833000 audit: BPF prog-id=16 op=UNLOAD May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { perfmon } for pid=4698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit[4698]: AVC avc: denied { bpf } for pid=4698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.833000 audit: BPF prog-id=17 op=LOAD May 13 08:26:00.833000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3c03d740 a2=40 a3=7ffe3c03d920 items=0 ppid=4581 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 13 08:26:00.833000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 13 08:26:00.833000 audit: BPF prog-id=17 op=UNLOAD May 13 08:26:00.966831 systemd-networkd[1030]: vxlan.calico: Link UP May 13 08:26:00.966846 systemd-networkd[1030]: vxlan.calico: Gained carrier May 13 08:26:00.994000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
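The longer proctitle attached to the pid 4698 records decodes the same way (hex copied verbatim from the log; the decoding itself is our sketch). Note the payload is exactly 128 bytes, which matches the audit subsystem's proctitle capture cap, so the final `name` argument appears cut off mid-token:

```python
# Hex proctitle from the pid 4698 audit records (copied verbatim).
CREATE_HEX = "627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F"

# argv entries are NUL-separated; join them back into a command line.
create_cmd = " ".join(a.decode() for a in bytes.fromhex(CREATE_HEX).split(b"\x00") if a)
print(create_cmd)
# → bpftool map create /sys/fs/bpf/calico/calico_failsafe_ports_v1
#   type hash key 4 value 1 entries 65535 name calico_failsafe_ports_
```

So this burst is Calico pinning its failsafe-ports BPF map under `/sys/fs/bpf/calico/`.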
tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.994000 audit: BPF prog-id=18 op=LOAD May 13 08:26:00.994000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc567b6680 a2=98 a3=100 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.994000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit: BPF prog-id=18 op=UNLOAD May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit: BPF prog-id=19 op=LOAD May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc567b6490 a2=74 a3=540051 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit: BPF prog-id=19 op=UNLOAD May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit: BPF prog-id=20 op=LOAD May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc567b64c0 a2=94 a3=2 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit: BPF prog-id=20 op=UNLOAD May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b6390 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc567b63c0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc567b62d0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b63e0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b63c0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b63b0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b63e0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc567b63c0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc567b63e0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc567b63b0 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc567b6420 a2=28 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for 
pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.995000 audit: BPF prog-id=21 op=LOAD May 13 08:26:00.995000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc567b6290 a2=40 a3=0 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.995000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.995000 audit: BPF prog-id=21 op=UNLOAD May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc567b6280 a2=50 a3=2800 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc567b6280 a2=50 a3=2800 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit: BPF prog-id=22 op=LOAD May 13 08:26:00.997000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc567b5aa0 a2=94 a3=2 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:00.997000 audit: BPF prog-id=22 op=UNLOAD May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { perfmon } for pid=4727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit[4727]: AVC avc: denied { bpf } for pid=4727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:00.997000 audit: BPF prog-id=23 op=LOAD May 13 08:26:00.997000 audit[4727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc567b5ba0 a2=94 a3=30 items=0 ppid=4581 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:00.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 
13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit: BPF prog-id=24 op=LOAD May 13 08:26:01.001000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff3392db0 a2=98 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.001000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.001000 audit: BPF prog-id=24 op=UNLOAD May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC 
avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit: BPF prog-id=25 op=LOAD May 13 08:26:01.001000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff3392b90 a2=74 a3=540051 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.001000 audit: BPF prog-id=25 op=UNLOAD May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.001000 audit: BPF prog-id=26 op=LOAD May 13 08:26:01.001000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff3392bc0 a2=94 a3=2 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.001000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.001000 audit: BPF prog-id=26 op=UNLOAD May 13 08:26:01.128000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit: BPF prog-id=27 op=LOAD May 13 08:26:01.128000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff3392a80 a2=40 a3=1 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.128000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.128000 audit: BPF prog-id=27 op=UNLOAD May 13 08:26:01.128000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.128000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffff3392b50 a2=50 a3=7ffff3392c30 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392a90 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff3392ac0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff33929d0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392ae0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392ac0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392ab0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392ae0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.137000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.137000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff3392ac0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff3392ae0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff3392ab0 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffff3392b20 a2=28 a3=0 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffff33928d0 a2=50 a3=1 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit: BPF prog-id=28 op=LOAD May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff33928d0 a2=94 a3=5 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit: BPF prog-id=28 op=UNLOAD May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffff3392980 a2=50 a3=1 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffff3392aa0 a2=4 a3=38 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: 
denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.138000 audit[4731]: AVC avc: denied { confidentiality } for pid=4731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:01.138000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffff3392af0 a2=94 a3=6 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.138000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { confidentiality } for pid=4731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffff33922a0 a2=94 a3=83 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { perfmon } for pid=4731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { confidentiality } for pid=4731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffff33922a0 a2=94 a3=83 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffff3393ce0 a2=10 a3=f0f1 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffff3393b80 a2=10 a3=3 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffff3393b20 a2=10 a3=3 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.139000 audit[4731]: AVC avc: denied { bpf } for pid=4731 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 13 08:26:01.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffff3393b20 a2=10 a3=7 items=0 ppid=4581 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 13 08:26:01.145000 audit: BPF prog-id=23 op=UNLOAD May 13 08:26:01.215755 kubelet[2219]: I0513 08:26:01.215415 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fx4p8" podStartSLOduration=78.215391089 podStartE2EDuration="1m18.215391089s" podCreationTimestamp="2025-05-13 08:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:26:00.282944199 +0000 UTC m=+91.791464688" watchObservedRunningTime="2025-05-13 08:26:01.215391089 +0000 UTC m=+92.723911568" May 13 08:26:01.236000 audit[4754]: NETFILTER_CFG table=mangle:107 family=2 entries=16 op=nft_register_chain pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:01.236000 audit[4754]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffffb4f0650 a2=0 a3=7ffffb4f063c items=0 ppid=4581 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.236000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:01.247000 audit[4758]: NETFILTER_CFG table=filter:108 family=2 entries=13 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:01.247000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc689c9de0 a2=0 a3=7ffc689c9dcc items=0 ppid=2353 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:01.253000 audit[4758]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:01.253000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc689c9de0 a2=0 a3=7ffc689c9dcc items=0 ppid=2353 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:01.271000 audit[4757]: NETFILTER_CFG table=nat:110 family=2 entries=15 op=nft_register_chain pid=4757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:01.271000 audit[4757]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc1efe5820 a2=0 a3=7ffc1efe580c items=0 ppid=4581 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.271000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:01.272000 audit[4759]: NETFILTER_CFG table=filter:111 family=2 entries=69 op=nft_register_chain pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:01.272000 audit[4759]: SYSCALL arch=c000003e syscall=46 success=yes exit=36404 a0=3 a1=7ffc90a05120 a2=0 a3=7ffc90a0510c items=0 ppid=4581 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:01.281000 audit[4755]: NETFILTER_CFG table=raw:112 family=2 entries=21 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:01.281000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd6c916570 a2=0 a3=7ffd6c91655c items=0 ppid=4581 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:01.281000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:02.235045 systemd-networkd[1030]: vxlan.calico: Gained IPv6LL May 13 08:26:04.684813 env[1260]: time="2025-05-13T08:26:04.684194390Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 
08:26:04.932942 env[1260]: 2025-05-13 08:26:04.849 [INFO][4784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.850 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" iface="eth0" netns="/var/run/netns/cni-53d90f6d-5f85-e40b-bb55-7222d284a896" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.850 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" iface="eth0" netns="/var/run/netns/cni-53d90f6d-5f85-e40b-bb55-7222d284a896" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.853 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" iface="eth0" netns="/var/run/netns/cni-53d90f6d-5f85-e40b-bb55-7222d284a896" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.854 [INFO][4784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.854 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.919 [INFO][4791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.919 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.919 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.927 [WARNING][4791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.927 [INFO][4791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.929 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:04.932942 env[1260]: 2025-05-13 08:26:04.931 [INFO][4784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:04.938218 systemd[1]: run-netns-cni\x2d53d90f6d\x2d5f85\x2de40b\x2dbb55\x2d7222d284a896.mount: Deactivated successfully. 
May 13 08:26:04.940236 env[1260]: time="2025-05-13T08:26:04.939791547Z" level=info msg="TearDown network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" successfully" May 13 08:26:04.940373 env[1260]: time="2025-05-13T08:26:04.940339204Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" returns successfully" May 13 08:26:04.942005 env[1260]: time="2025-05-13T08:26:04.941944361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zls78,Uid:c3a34ed8-4b6e-4268-a42b-192aa9ef609b,Namespace:calico-system,Attempt:1,}" May 13 08:26:05.147907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:05.148070 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidc9d348b8ca: link becomes ready May 13 08:26:05.146736 systemd-networkd[1030]: calidc9d348b8ca: Link UP May 13 08:26:05.148984 systemd-networkd[1030]: calidc9d348b8ca: Gained carrier May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.032 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0 csi-node-driver- calico-system c3a34ed8-4b6e-4268-a42b-192aa9ef609b 1052 0 2025-05-13 08:24:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal csi-node-driver-zls78 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc9d348b8ca [] []}} ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" 
WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.032 [INFO][4797] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.079 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" HandleID="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.091 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" HandleID="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"csi-node-driver-zls78", "timestamp":"2025-05-13 08:26:05.079893137 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.091 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.091 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.091 [INFO][4810] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.094 [INFO][4810] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.106 [INFO][4810] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.112 [INFO][4810] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.114 [INFO][4810] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.117 [INFO][4810] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.117 [INFO][4810] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.119 [INFO][4810] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3 May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.124 [INFO][4810] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.132 [INFO][4810] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.24.2/26] block=192.168.24.0/26 handle="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.132 [INFO][4810] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.2/26] handle="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.132 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:05.181882 env[1260]: 2025-05-13 08:26:05.132 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.2/26] IPv6=[] ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" HandleID="k8s-pod-network.fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.190936 env[1260]: 2025-05-13 08:26:05.135 [INFO][4797] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3a34ed8-4b6e-4268-a42b-192aa9ef609b", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"csi-node-driver-zls78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc9d348b8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:05.190936 env[1260]: 2025-05-13 08:26:05.135 [INFO][4797] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.2/32] ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.190936 env[1260]: 2025-05-13 08:26:05.135 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc9d348b8ca ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.190936 env[1260]: 2025-05-13 08:26:05.150 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.190936 
env[1260]: 2025-05-13 08:26:05.151 [INFO][4797] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3a34ed8-4b6e-4268-a42b-192aa9ef609b", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3", Pod:"csi-node-driver-zls78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc9d348b8ca", MAC:"96:60:ef:36:e1:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:05.190936 env[1260]: 2025-05-13 08:26:05.175 
[INFO][4797] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3" Namespace="calico-system" Pod="csi-node-driver-zls78" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:05.194000 audit[4832]: NETFILTER_CFG table=filter:113 family=2 entries=38 op=nft_register_chain pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:05.197032 kernel: kauditd_printk_skb: 517 callbacks suppressed May 13 08:26:05.197103 kernel: audit: type=1325 audit(1747124765.194:414): table=filter:113 family=2 entries=38 op=nft_register_chain pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:05.194000 audit[4832]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffe6ed682d0 a2=0 a3=7ffe6ed682bc items=0 ppid=4581 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:05.194000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:05.211958 kernel: audit: type=1300 audit(1747124765.194:414): arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffe6ed682d0 a2=0 a3=7ffe6ed682bc items=0 ppid=4581 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:05.212097 kernel: audit: type=1327 audit(1747124765.194:414): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:05.212302 env[1260]: 
time="2025-05-13T08:26:05.211903970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:05.212302 env[1260]: time="2025-05-13T08:26:05.211987750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:05.212302 env[1260]: time="2025-05-13T08:26:05.212034390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:05.212641 env[1260]: time="2025-05-13T08:26:05.212298014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3 pid=4840 runtime=io.containerd.runc.v2 May 13 08:26:05.288493 env[1260]: time="2025-05-13T08:26:05.288428472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zls78,Uid:c3a34ed8-4b6e-4268-a42b-192aa9ef609b,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3\"" May 13 08:26:05.292812 env[1260]: time="2025-05-13T08:26:05.292774567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 08:26:05.677422 env[1260]: time="2025-05-13T08:26:05.677352900Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.778 [INFO][4890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.778 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" iface="eth0" netns="/var/run/netns/cni-c9c518d7-3a5b-4b99-9716-b0507f24545b" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.779 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" iface="eth0" netns="/var/run/netns/cni-c9c518d7-3a5b-4b99-9716-b0507f24545b" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.779 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" iface="eth0" netns="/var/run/netns/cni-c9c518d7-3a5b-4b99-9716-b0507f24545b" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.779 [INFO][4890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.779 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.831 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.832 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.832 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.846 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.846 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.855 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:05.861678 env[1260]: 2025-05-13 08:26:05.859 [INFO][4890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:05.861678 env[1260]: time="2025-05-13T08:26:05.861799548Z" level=info msg="TearDown network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" successfully" May 13 08:26:05.861678 env[1260]: time="2025-05-13T08:26:05.861841699Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" returns successfully" May 13 08:26:05.865127 env[1260]: time="2025-05-13T08:26:05.862809218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5nkr,Uid:9b8bef73-58a7-4997-947c-91687cbacd52,Namespace:kube-system,Attempt:1,}" May 13 08:26:05.938171 systemd[1]: run-netns-cni\x2dc9c518d7\x2d3a5b\x2d4b99\x2d9716\x2db0507f24545b.mount: Deactivated successfully. 
May 13 08:26:06.050667 systemd-networkd[1030]: cali6a1b2a54de2: Link UP May 13 08:26:06.062310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6a1b2a54de2: link becomes ready May 13 08:26:06.061753 systemd-networkd[1030]: cali6a1b2a54de2: Gained carrier May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.945 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0 coredns-7db6d8ff4d- kube-system 9b8bef73-58a7-4997-947c-91687cbacd52 1058 0 2025-05-13 08:24:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal coredns-7db6d8ff4d-m5nkr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a1b2a54de2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.946 [INFO][4905] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.985 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" HandleID="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 
08:26:06.106384 env[1260]: 2025-05-13 08:26:05.995 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" HandleID="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043c650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"coredns-7db6d8ff4d-m5nkr", "timestamp":"2025-05-13 08:26:05.98514429 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.995 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.995 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.995 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:05.997 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.002 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.008 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.010 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.013 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.013 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.015 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.027 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.040 [INFO][4917] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.24.3/26] block=192.168.24.0/26 handle="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.040 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.3/26] handle="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.040 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:06.106384 env[1260]: 2025-05-13 08:26:06.040 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.3/26] IPv6=[] ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" HandleID="k8s-pod-network.dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.046 [INFO][4905] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b8bef73-58a7-4997-947c-91687cbacd52", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-m5nkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1b2a54de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.046 [INFO][4905] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.3/32] ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.047 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a1b2a54de2 ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.074 [INFO][4905] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.075 [INFO][4905] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b8bef73-58a7-4997-947c-91687cbacd52", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c", Pod:"coredns-7db6d8ff4d-m5nkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1b2a54de2", MAC:"ea:06:fc:a1:04:22", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:06.107148 env[1260]: 2025-05-13 08:26:06.105 [INFO][4905] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m5nkr" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:06.144119 env[1260]: time="2025-05-13T08:26:06.143880517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:06.144119 env[1260]: time="2025-05-13T08:26:06.143929550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:06.144119 env[1260]: time="2025-05-13T08:26:06.143944299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:06.145769 env[1260]: time="2025-05-13T08:26:06.145711898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c pid=4939 runtime=io.containerd.runc.v2 May 13 08:26:06.263849 systemd-networkd[1030]: calidc9d348b8ca: Gained IPv6LL May 13 08:26:06.326309 env[1260]: time="2025-05-13T08:26:06.326257553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5nkr,Uid:9b8bef73-58a7-4997-947c-91687cbacd52,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c\"" May 13 08:26:06.333466 env[1260]: time="2025-05-13T08:26:06.330972497Z" level=info msg="CreateContainer within sandbox \"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 08:26:06.339000 audit[4979]: NETFILTER_CFG table=filter:114 family=2 entries=34 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:06.339000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffc25358860 a2=0 a3=7ffc2535884c items=0 ppid=4581 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:06.352755 kernel: audit: type=1325 audit(1747124766.339:415): table=filter:114 family=2 entries=34 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:06.352864 kernel: audit: type=1300 audit(1747124766.339:415): arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffc25358860 a2=0 a3=7ffc2535884c items=0 ppid=4581 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:06.352930 kernel: audit: type=1327 audit(1747124766.339:415): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:06.339000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:06.370255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785538382.mount: Deactivated successfully. May 13 08:26:06.377352 env[1260]: time="2025-05-13T08:26:06.377293444Z" level=info msg="CreateContainer within sandbox \"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a53eecd86770dc4426fec20ed6c5111778fa0fe84a53e80628855c66edcaed31\"" May 13 08:26:06.379301 env[1260]: time="2025-05-13T08:26:06.378044350Z" level=info msg="StartContainer for \"a53eecd86770dc4426fec20ed6c5111778fa0fe84a53e80628855c66edcaed31\"" May 13 08:26:06.438367 env[1260]: time="2025-05-13T08:26:06.438304675Z" level=info msg="StartContainer for \"a53eecd86770dc4426fec20ed6c5111778fa0fe84a53e80628855c66edcaed31\" returns successfully" May 13 08:26:06.678991 env[1260]: time="2025-05-13T08:26:06.678915111Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" iface="eth0" netns="/var/run/netns/cni-bb50bbd6-1fa2-6663-0b5a-55569a259bd7" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" iface="eth0" netns="/var/run/netns/cni-bb50bbd6-1fa2-6663-0b5a-55569a259bd7" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" iface="eth0" netns="/var/run/netns/cni-bb50bbd6-1fa2-6663-0b5a-55569a259bd7" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.778 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.830 [INFO][5037] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.831 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.831 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.837 [WARNING][5037] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.837 [INFO][5037] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.839 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:06.841989 env[1260]: 2025-05-13 08:26:06.840 [INFO][5029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:06.842648 env[1260]: time="2025-05-13T08:26:06.842119616Z" level=info msg="TearDown network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" successfully" May 13 08:26:06.842648 env[1260]: time="2025-05-13T08:26:06.842175132Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" returns successfully" May 13 08:26:06.922068 kubelet[2219]: I0513 08:26:06.921957 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq9sq\" (UniqueName: \"kubernetes.io/projected/ff325600-1957-4033-960b-4591b00b1eb4-kube-api-access-tq9sq\") pod \"ff325600-1957-4033-960b-4591b00b1eb4\" (UID: \"ff325600-1957-4033-960b-4591b00b1eb4\") " May 13 08:26:06.922068 kubelet[2219]: I0513 08:26:06.922039 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ff325600-1957-4033-960b-4591b00b1eb4-tigera-ca-bundle\") pod \"ff325600-1957-4033-960b-4591b00b1eb4\" (UID: \"ff325600-1957-4033-960b-4591b00b1eb4\") " May 13 08:26:06.923543 kubelet[2219]: I0513 08:26:06.923378 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff325600-1957-4033-960b-4591b00b1eb4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ff325600-1957-4033-960b-4591b00b1eb4" (UID: "ff325600-1957-4033-960b-4591b00b1eb4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 08:26:06.928265 kubelet[2219]: I0513 08:26:06.928187 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff325600-1957-4033-960b-4591b00b1eb4-kube-api-access-tq9sq" (OuterVolumeSpecName: "kube-api-access-tq9sq") pod "ff325600-1957-4033-960b-4591b00b1eb4" (UID: "ff325600-1957-4033-960b-4591b00b1eb4"). InnerVolumeSpecName "kube-api-access-tq9sq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:26:06.938651 systemd[1]: run-netns-cni\x2dbb50bbd6\x2d1fa2\x2d6663\x2d0b5a\x2d55569a259bd7.mount: Deactivated successfully. May 13 08:26:06.938810 systemd[1]: var-lib-kubelet-pods-ff325600\x2d1957\x2d4033\x2d960b\x2d4591b00b1eb4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtq9sq.mount: Deactivated successfully. 
May 13 08:26:07.022988 kubelet[2219]: I0513 08:26:07.022907 2219 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tq9sq\" (UniqueName: \"kubernetes.io/projected/ff325600-1957-4033-960b-4591b00b1eb4-kube-api-access-tq9sq\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:07.022988 kubelet[2219]: I0513 08:26:07.022988 2219 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff325600-1957-4033-960b-4591b00b1eb4-tigera-ca-bundle\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:07.291418 kubelet[2219]: I0513 08:26:07.291207 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m5nkr" podStartSLOduration=84.291152475 podStartE2EDuration="1m24.291152475s" podCreationTimestamp="2025-05-13 08:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:26:07.279203047 +0000 UTC m=+98.787723526" watchObservedRunningTime="2025-05-13 08:26:07.291152475 +0000 UTC m=+98.799672954" May 13 08:26:07.332000 audit[5045]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.337667 kernel: audit: type=1325 audit(1747124767.332:416): table=filter:115 family=2 entries=10 op=nft_register_rule pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.332000 audit[5045]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe95d6f7f0 a2=0 a3=7ffe95d6f7dc items=0 ppid=2353 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:07.370540 kernel: audit: type=1300 audit(1747124767.332:416): arch=c000003e 
syscall=46 success=yes exit=3676 a0=3 a1=7ffe95d6f7f0 a2=0 a3=7ffe95d6f7dc items=0 ppid=2353 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:07.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:07.379008 kernel: audit: type=1327 audit(1747124767.332:416): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:07.360000 audit[5045]: NETFILTER_CFG table=nat:116 family=2 entries=44 op=nft_register_rule pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.388663 kernel: audit: type=1325 audit(1747124767.360:417): table=nat:116 family=2 entries=44 op=nft_register_rule pid=5045 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.360000 audit[5045]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe95d6f7f0 a2=0 a3=7ffe95d6f7dc items=0 ppid=2353 pid=5045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:07.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:07.395648 kubelet[2219]: I0513 08:26:07.391634 2219 topology_manager.go:215] "Topology Admit Handler" podUID="22de07ea-a855-47d2-b57e-3234033deafc" podNamespace="calico-system" podName="calico-kube-controllers-7bd85bb678-dldxw" May 13 08:26:07.425929 kubelet[2219]: I0513 08:26:07.425794 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7s5f\" (UniqueName: 
\"kubernetes.io/projected/22de07ea-a855-47d2-b57e-3234033deafc-kube-api-access-x7s5f\") pod \"calico-kube-controllers-7bd85bb678-dldxw\" (UID: \"22de07ea-a855-47d2-b57e-3234033deafc\") " pod="calico-system/calico-kube-controllers-7bd85bb678-dldxw" May 13 08:26:07.425929 kubelet[2219]: I0513 08:26:07.425846 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22de07ea-a855-47d2-b57e-3234033deafc-tigera-ca-bundle\") pod \"calico-kube-controllers-7bd85bb678-dldxw\" (UID: \"22de07ea-a855-47d2-b57e-3234033deafc\") " pod="calico-system/calico-kube-controllers-7bd85bb678-dldxw" May 13 08:26:07.470000 audit[5047]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=5047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.470000 audit[5047]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe79ae3a80 a2=0 a3=7ffe79ae3a6c items=0 ppid=2353 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:07.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:07.480806 systemd-networkd[1030]: cali6a1b2a54de2: Gained IPv6LL May 13 08:26:07.502000 audit[5047]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=5047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:07.502000 audit[5047]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe79ae3a80 a2=0 a3=7ffe79ae3a6c items=0 ppid=2353 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 
08:26:07.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:07.700260 env[1260]: time="2025-05-13T08:26:07.700206743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd85bb678-dldxw,Uid:22de07ea-a855-47d2-b57e-3234033deafc,Namespace:calico-system,Attempt:0,}" May 13 08:26:07.878327 env[1260]: time="2025-05-13T08:26:07.876974856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:07.883771 env[1260]: time="2025-05-13T08:26:07.883731775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:07.889917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:07.890050 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali34c526f5f0d: link becomes ready May 13 08:26:07.891877 systemd-networkd[1030]: cali34c526f5f0d: Link UP May 13 08:26:07.892110 systemd-networkd[1030]: cali34c526f5f0d: Gained carrier May 13 08:26:07.904654 env[1260]: time="2025-05-13T08:26:07.902666540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.785 [INFO][5050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0 calico-kube-controllers-7bd85bb678- calico-system 22de07ea-a855-47d2-b57e-3234033deafc 1090 0 2025-05-13 08:26:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:7bd85bb678 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal calico-kube-controllers-7bd85bb678-dldxw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali34c526f5f0d [] []}} ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.785 [INFO][5050] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.828 [INFO][5062] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" HandleID="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.839 [INFO][5062] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" HandleID="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029ef60), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"calico-kube-controllers-7bd85bb678-dldxw", "timestamp":"2025-05-13 08:26:07.828182371 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.840 [INFO][5062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.840 [INFO][5062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.840 [INFO][5062] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.842 [INFO][5062] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.847 [INFO][5062] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.852 [INFO][5062] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.854 [INFO][5062] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.857 [INFO][5062] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.858 [INFO][5062] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 
handle="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.862 [INFO][5062] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3 May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.869 [INFO][5062] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.881 [INFO][5062] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.4/26] block=192.168.24.0/26 handle="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.881 [INFO][5062] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.4/26] handle="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.881 [INFO][5062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 08:26:07.922628 env[1260]: 2025-05-13 08:26:07.881 [INFO][5062] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.4/26] IPv6=[] ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" HandleID="k8s-pod-network.4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.883 [INFO][5050] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0", GenerateName:"calico-kube-controllers-7bd85bb678-", Namespace:"calico-system", SelfLink:"", UID:"22de07ea-a855-47d2-b57e-3234033deafc", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd85bb678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7bd85bb678-dldxw", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34c526f5f0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.884 [INFO][5050] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.4/32] ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.884 [INFO][5050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34c526f5f0d ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.890 [INFO][5050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.893 [INFO][5050] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0", GenerateName:"calico-kube-controllers-7bd85bb678-", Namespace:"calico-system", SelfLink:"", UID:"22de07ea-a855-47d2-b57e-3234033deafc", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 26, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd85bb678", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3", Pod:"calico-kube-controllers-7bd85bb678-dldxw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34c526f5f0d", MAC:"5e:df:60:d1:c7:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:07.923442 env[1260]: 2025-05-13 08:26:07.916 [INFO][5050] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3" Namespace="calico-system" Pod="calico-kube-controllers-7bd85bb678-dldxw" 
WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--7bd85bb678--dldxw-eth0" May 13 08:26:07.928000 audit[5077]: NETFILTER_CFG table=filter:119 family=2 entries=42 op=nft_register_chain pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:07.928000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffd8540d6e0 a2=0 a3=7ffd8540d6cc items=0 ppid=4581 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:07.928000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:07.956761 env[1260]: time="2025-05-13T08:26:07.956669817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:07.957916 env[1260]: time="2025-05-13T08:26:07.957871465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 08:26:07.976268 env[1260]: time="2025-05-13T08:26:07.976228555Z" level=info msg="CreateContainer within sandbox \"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 08:26:07.991866 env[1260]: time="2025-05-13T08:26:07.991728432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:07.991866 env[1260]: time="2025-05-13T08:26:07.991773247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:07.991866 env[1260]: time="2025-05-13T08:26:07.991786523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:07.992113 env[1260]: time="2025-05-13T08:26:07.991907143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3 pid=5092 runtime=io.containerd.runc.v2 May 13 08:26:08.053772 env[1260]: time="2025-05-13T08:26:08.053727030Z" level=info msg="CreateContainer within sandbox \"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"664eb0b741d737903aa0ee69deeab03f0542a4dbee7941f42aea67a06c3dce18\"" May 13 08:26:08.055690 env[1260]: time="2025-05-13T08:26:08.055270953Z" level=info msg="StartContainer for \"664eb0b741d737903aa0ee69deeab03f0542a4dbee7941f42aea67a06c3dce18\"" May 13 08:26:08.113770 env[1260]: time="2025-05-13T08:26:08.113718543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd85bb678-dldxw,Uid:22de07ea-a855-47d2-b57e-3234033deafc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3\"" May 13 08:26:08.116938 env[1260]: time="2025-05-13T08:26:08.116890079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 08:26:08.169164 env[1260]: time="2025-05-13T08:26:08.169088147Z" level=info msg="StartContainer for \"664eb0b741d737903aa0ee69deeab03f0542a4dbee7941f42aea67a06c3dce18\" returns successfully" May 13 08:26:08.690117 kubelet[2219]: I0513 08:26:08.689868 2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff325600-1957-4033-960b-4591b00b1eb4" path="/var/lib/kubelet/pods/ff325600-1957-4033-960b-4591b00b1eb4/volumes" May 13 
08:26:09.336378 systemd-networkd[1030]: cali34c526f5f0d: Gained IPv6LL May 13 08:26:09.681209 env[1260]: time="2025-05-13T08:26:09.681101706Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.828 [INFO][5172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.828 [INFO][5172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="/var/run/netns/cni-6d511899-0481-324b-79cd-0f2eb23391ec" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.828 [INFO][5172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="/var/run/netns/cni-6d511899-0481-324b-79cd-0f2eb23391ec" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.829 [INFO][5172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="/var/run/netns/cni-6d511899-0481-324b-79cd-0f2eb23391ec" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.829 [INFO][5172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.829 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.861 [INFO][5180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.861 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.861 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.870 [WARNING][5180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.870 [INFO][5180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.876 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:09.878894 env[1260]: 2025-05-13 08:26:09.877 [INFO][5172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:09.882568 env[1260]: time="2025-05-13T08:26:09.882529309Z" level=info msg="TearDown network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" successfully" May 13 08:26:09.882607 systemd[1]: run-netns-cni\x2d6d511899\x2d0481\x2d324b\x2d79cd\x2d0f2eb23391ec.mount: Deactivated successfully. 
May 13 08:26:09.883241 env[1260]: time="2025-05-13T08:26:09.883218457Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" returns successfully" May 13 08:26:09.884453 env[1260]: time="2025-05-13T08:26:09.884417361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-q7tml,Uid:ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7,Namespace:calico-apiserver,Attempt:1,}" May 13 08:26:10.041144 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:10.041273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali33a7d3c021c: link becomes ready May 13 08:26:10.041479 systemd-networkd[1030]: cali33a7d3c021c: Link UP May 13 08:26:10.041732 systemd-networkd[1030]: cali33a7d3c021c: Gained carrier May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.947 [INFO][5187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0 calico-apiserver-769cd4b6f5- calico-apiserver ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7 1106 0 2025-05-13 08:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769cd4b6f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal calico-apiserver-769cd4b6f5-q7tml eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33a7d3c021c [] []}} ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.948 [INFO][5187] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.979 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.992 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302bd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"calico-apiserver-769cd4b6f5-q7tml", "timestamp":"2025-05-13 08:26:09.97983213 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.992 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.992 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.992 [INFO][5200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:09.995 [INFO][5200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.002 [INFO][5200] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.008 [INFO][5200] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.010 [INFO][5200] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.013 [INFO][5200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.013 [INFO][5200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.015 [INFO][5200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.021 [INFO][5200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.032 [INFO][5200] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.24.5/26] block=192.168.24.0/26 handle="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.032 [INFO][5200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.5/26] handle="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.032 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:10.077075 env[1260]: 2025-05-13 08:26:10.032 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.5/26] IPv6=[] ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.033 [INFO][5187] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"calico-apiserver-769cd4b6f5-q7tml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a7d3c021c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.034 [INFO][5187] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.5/32] ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.034 [INFO][5187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33a7d3c021c ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.042 [INFO][5187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" 
WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.044 [INFO][5187] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe", Pod:"calico-apiserver-769cd4b6f5-q7tml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a7d3c021c", 
MAC:"22:ae:88:b0:09:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:10.080046 env[1260]: 2025-05-13 08:26:10.067 [INFO][5187] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-q7tml" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:10.083000 audit[5214]: NETFILTER_CFG table=filter:120 family=2 entries=62 op=nft_register_chain pid=5214 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:10.083000 audit[5214]: SYSCALL arch=c000003e syscall=46 success=yes exit=31096 a0=3 a1=7ffcb673ead0 a2=0 a3=7ffcb673eabc items=0 ppid=4581 pid=5214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:10.083000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:10.106339 env[1260]: time="2025-05-13T08:26:10.106243947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:10.106537 env[1260]: time="2025-05-13T08:26:10.106291308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:10.106537 env[1260]: time="2025-05-13T08:26:10.106309042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:10.106537 env[1260]: time="2025-05-13T08:26:10.106464369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe pid=5228 runtime=io.containerd.runc.v2 May 13 08:26:10.147866 systemd[1]: run-containerd-runc-k8s.io-62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe-runc.LXBRh5.mount: Deactivated successfully. May 13 08:26:10.245092 env[1260]: time="2025-05-13T08:26:10.245040898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-q7tml,Uid:ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\"" May 13 08:26:10.690322 env[1260]: time="2025-05-13T08:26:10.690187693Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.017 [INFO][5277] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.018 [INFO][5277] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="/var/run/netns/cni-c1765cb7-f7f7-c222-c86c-b7005afcc8b1" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.019 [INFO][5277] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="/var/run/netns/cni-c1765cb7-f7f7-c222-c86c-b7005afcc8b1" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.025 [INFO][5277] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="/var/run/netns/cni-c1765cb7-f7f7-c222-c86c-b7005afcc8b1" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.025 [INFO][5277] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.025 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.064 [INFO][5284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.064 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.064 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.086 [WARNING][5284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.086 [INFO][5284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.088 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:11.092215 env[1260]: 2025-05-13 08:26:11.090 [INFO][5277] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:11.097671 systemd[1]: run-netns-cni\x2dc1765cb7\x2df7f7\x2dc222\x2dc86c\x2db7005afcc8b1.mount: Deactivated successfully. 
May 13 08:26:11.100050 env[1260]: time="2025-05-13T08:26:11.099989089Z" level=info msg="TearDown network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" successfully" May 13 08:26:11.100185 env[1260]: time="2025-05-13T08:26:11.100156981Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" returns successfully" May 13 08:26:11.101111 env[1260]: time="2025-05-13T08:26:11.101084706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-h8qgj,Uid:be3ad376-6a34-448c-8bd7-d065d8e46df2,Namespace:calico-apiserver,Attempt:1,}" May 13 08:26:11.256007 systemd-networkd[1030]: cali33a7d3c021c: Gained IPv6LL May 13 08:26:11.366888 systemd-networkd[1030]: cali602f5b1f3e7: Link UP May 13 08:26:11.370867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:11.371143 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali602f5b1f3e7: link becomes ready May 13 08:26:11.371110 systemd-networkd[1030]: cali602f5b1f3e7: Gained carrier May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.197 [INFO][5290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0 calico-apiserver-769cd4b6f5- calico-apiserver be3ad376-6a34-448c-8bd7-d065d8e46df2 1115 0 2025-05-13 08:24:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:769cd4b6f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal calico-apiserver-769cd4b6f5-h8qgj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali602f5b1f3e7 [] []}} ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" 
Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.198 [INFO][5290] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.265 [INFO][5303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.297 [INFO][5303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"calico-apiserver-769cd4b6f5-h8qgj", "timestamp":"2025-05-13 08:26:11.265026211 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.297 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.297 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.297 [INFO][5303] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.300 [INFO][5303] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.308 [INFO][5303] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.316 [INFO][5303] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.320 [INFO][5303] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.323 [INFO][5303] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.323 [INFO][5303] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.327 [INFO][5303] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980 May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.334 [INFO][5303] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" 
host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.349 [INFO][5303] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.6/26] block=192.168.24.0/26 handle="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.350 [INFO][5303] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.6/26] handle="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.350 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:11.393923 env[1260]: 2025-05-13 08:26:11.350 [INFO][5303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.6/26] IPv6=[] ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.355 [INFO][5290] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3ad376-6a34-448c-8bd7-d065d8e46df2", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"calico-apiserver-769cd4b6f5-h8qgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali602f5b1f3e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.356 [INFO][5290] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.6/32] ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.356 [INFO][5290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali602f5b1f3e7 ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.371 [INFO][5290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.372 [INFO][5290] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3ad376-6a34-448c-8bd7-d065d8e46df2", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980", Pod:"calico-apiserver-769cd4b6f5-h8qgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali602f5b1f3e7", MAC:"06:f6:18:64:c1:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:11.395258 env[1260]: 2025-05-13 08:26:11.392 [INFO][5290] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Namespace="calico-apiserver" Pod="calico-apiserver-769cd4b6f5-h8qgj" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:11.447616 env[1260]: time="2025-05-13T08:26:11.441910485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:11.447616 env[1260]: time="2025-05-13T08:26:11.441944580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:11.447616 env[1260]: time="2025-05-13T08:26:11.441957845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:11.447616 env[1260]: time="2025-05-13T08:26:11.442070260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980 pid=5335 runtime=io.containerd.runc.v2 May 13 08:26:11.515999 systemd[1]: run-containerd-runc-k8s.io-af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980-runc.SvllJu.mount: Deactivated successfully. 
May 13 08:26:11.526673 kernel: kauditd_printk_skb: 14 callbacks suppressed May 13 08:26:11.526826 kernel: audit: type=1325 audit(1747124771.519:422): table=filter:121 family=2 entries=52 op=nft_register_chain pid=5336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:11.519000 audit[5336]: NETFILTER_CFG table=filter:121 family=2 entries=52 op=nft_register_chain pid=5336 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:11.519000 audit[5336]: SYSCALL arch=c000003e syscall=46 success=yes exit=26728 a0=3 a1=7ffd0961d200 a2=0 a3=7ffd0961d1ec items=0 ppid=4581 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:11.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:11.545640 kernel: audit: type=1300 audit(1747124771.519:422): arch=c000003e syscall=46 success=yes exit=26728 a0=3 a1=7ffd0961d200 a2=0 a3=7ffd0961d1ec items=0 ppid=4581 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:11.545742 kernel: audit: type=1327 audit(1747124771.519:422): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:11.703648 env[1260]: time="2025-05-13T08:26:11.703593818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-769cd4b6f5-h8qgj,Uid:be3ad376-6a34-448c-8bd7-d065d8e46df2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\"" May 13 08:26:12.641702 env[1260]: 
time="2025-05-13T08:26:12.641261314Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:12.645190 env[1260]: time="2025-05-13T08:26:12.645123838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:12.648698 env[1260]: time="2025-05-13T08:26:12.648632914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:12.651766 env[1260]: time="2025-05-13T08:26:12.651698422Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:12.653473 env[1260]: time="2025-05-13T08:26:12.652641848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 08:26:12.664348 env[1260]: time="2025-05-13T08:26:12.663912421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 08:26:12.702638 env[1260]: time="2025-05-13T08:26:12.693928664Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:26:12.707093 env[1260]: time="2025-05-13T08:26:12.707010273Z" level=info msg="CreateContainer within sandbox \"4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 08:26:12.739345 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2461000495.mount: Deactivated successfully. May 13 08:26:12.753014 env[1260]: time="2025-05-13T08:26:12.752929666Z" level=info msg="CreateContainer within sandbox \"4f9e8145240916189449b0123d7213541101090132d382a735d77b3b1272eaa3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07\"" May 13 08:26:12.755860 env[1260]: time="2025-05-13T08:26:12.755812665Z" level=info msg="StartContainer for \"7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07\"" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.852 [INFO][5388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.854 [INFO][5388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" iface="eth0" netns="/var/run/netns/cni-9e516c1e-3522-0497-ca3c-248ea19137dc" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.854 [INFO][5388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" iface="eth0" netns="/var/run/netns/cni-9e516c1e-3522-0497-ca3c-248ea19137dc" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.855 [INFO][5388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" iface="eth0" netns="/var/run/netns/cni-9e516c1e-3522-0497-ca3c-248ea19137dc" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.855 [INFO][5388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.855 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.905 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.905 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.906 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.931 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.931 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.933 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:12.940374 env[1260]: 2025-05-13 08:26:12.935 [INFO][5388] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:12.940374 env[1260]: time="2025-05-13T08:26:12.937980208Z" level=info msg="TearDown network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" successfully" May 13 08:26:12.940374 env[1260]: time="2025-05-13T08:26:12.938035774Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" returns successfully" May 13 08:26:12.943264 env[1260]: time="2025-05-13T08:26:12.943229626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-wdj2v,Uid:00446d97-a96a-4df2-93a8-5f3d59494b3b,Namespace:calico-apiserver,Attempt:1,}" May 13 08:26:12.988555 env[1260]: time="2025-05-13T08:26:12.988486130Z" level=info msg="StartContainer for \"7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07\" returns successfully" May 13 08:26:13.053112 systemd-networkd[1030]: cali602f5b1f3e7: Gained IPv6LL May 13 08:26:13.178115 systemd-networkd[1030]: calif086f8d41c5: Link UP 
May 13 08:26:13.185276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:13.186178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif086f8d41c5: link becomes ready May 13 08:26:13.185890 systemd-networkd[1030]: calif086f8d41c5: Gained carrier May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.065 [INFO][5436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0 calico-apiserver-6644f7cd55- calico-apiserver 00446d97-a96a-4df2-93a8-5f3d59494b3b 1127 0 2025-05-13 08:24:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6644f7cd55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal calico-apiserver-6644f7cd55-wdj2v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif086f8d41c5 [] []}} ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.068 [INFO][5436] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.120 [INFO][5450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" 
HandleID="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.130 [INFO][5450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" HandleID="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"calico-apiserver-6644f7cd55-wdj2v", "timestamp":"2025-05-13 08:26:13.120115002 +0000 UTC"}, Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.131 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.131 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.131 [INFO][5450] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.133 [INFO][5450] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.138 [INFO][5450] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.143 [INFO][5450] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.146 [INFO][5450] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.149 [INFO][5450] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.149 [INFO][5450] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.151 [INFO][5450] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.156 [INFO][5450] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.171 [INFO][5450] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.24.7/26] block=192.168.24.0/26 handle="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.171 [INFO][5450] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.7/26] handle="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.171 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:13.213631 env[1260]: 2025-05-13 08:26:13.171 [INFO][5450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.7/26] IPv6=[] ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" HandleID="k8s-pod-network.ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.174 [INFO][5436] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"00446d97-a96a-4df2-93a8-5f3d59494b3b", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"calico-apiserver-6644f7cd55-wdj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif086f8d41c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.174 [INFO][5436] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.7/32] ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.174 [INFO][5436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif086f8d41c5 ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.187 [INFO][5436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" 
WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.187 [INFO][5436] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"00446d97-a96a-4df2-93a8-5f3d59494b3b", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb", Pod:"calico-apiserver-6644f7cd55-wdj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif086f8d41c5", 
MAC:"3a:d7:14:e7:d7:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:13.215004 env[1260]: 2025-05-13 08:26:13.205 [INFO][5436] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-wdj2v" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:13.265895 env[1260]: time="2025-05-13T08:26:13.259815621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:13.265895 env[1260]: time="2025-05-13T08:26:13.259911695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:13.265895 env[1260]: time="2025-05-13T08:26:13.259926152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:13.265895 env[1260]: time="2025-05-13T08:26:13.260160030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb pid=5476 runtime=io.containerd.runc.v2 May 13 08:26:13.272000 audit[5492]: NETFILTER_CFG table=filter:122 family=2 entries=46 op=nft_register_chain pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:13.278604 kernel: audit: type=1325 audit(1747124773.272:423): table=filter:122 family=2 entries=46 op=nft_register_chain pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:13.272000 audit[5492]: SYSCALL arch=c000003e syscall=46 success=yes exit=23860 a0=3 a1=7fffeaf2a120 a2=0 a3=7fffeaf2a10c items=0 ppid=4581 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:13.286736 kernel: audit: type=1300 audit(1747124773.272:423): arch=c000003e syscall=46 success=yes exit=23860 a0=3 a1=7fffeaf2a120 a2=0 a3=7fffeaf2a10c items=0 ppid=4581 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:13.272000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:13.301699 kernel: audit: type=1327 audit(1747124773.272:423): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:13.416798 env[1260]: time="2025-05-13T08:26:13.416736506Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-wdj2v,Uid:00446d97-a96a-4df2-93a8-5f3d59494b3b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb\"" May 13 08:26:13.690398 systemd[1]: run-netns-cni\x2d9e516c1e\x2d3522\x2d0497\x2dca3c\x2d248ea19137dc.mount: Deactivated successfully. May 13 08:26:13.800226 kubelet[2219]: I0513 08:26:13.800026 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bd85bb678-dldxw" podStartSLOduration=2.257262715 podStartE2EDuration="6.79995683s" podCreationTimestamp="2025-05-13 08:26:07 +0000 UTC" firstStartedPulling="2025-05-13 08:26:08.116134805 +0000 UTC m=+99.624655294" lastFinishedPulling="2025-05-13 08:26:12.65882892 +0000 UTC m=+104.167349409" observedRunningTime="2025-05-13 08:26:13.350109123 +0000 UTC m=+104.858629632" watchObservedRunningTime="2025-05-13 08:26:13.79995683 +0000 UTC m=+105.308477359" May 13 08:26:14.393113 systemd-networkd[1030]: calif086f8d41c5: Gained IPv6LL May 13 08:26:15.745919 env[1260]: time="2025-05-13T08:26:15.745770736Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:15.749303 env[1260]: time="2025-05-13T08:26:15.749235803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:15.751895 env[1260]: time="2025-05-13T08:26:15.751835521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:15.754144 env[1260]: time="2025-05-13T08:26:15.754083196Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:15.754957 env[1260]: time="2025-05-13T08:26:15.754884941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 08:26:15.758366 env[1260]: time="2025-05-13T08:26:15.757720532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 08:26:15.766651 env[1260]: time="2025-05-13T08:26:15.766543900Z" level=info msg="CreateContainer within sandbox \"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 08:26:15.794480 env[1260]: time="2025-05-13T08:26:15.794326317Z" level=info msg="CreateContainer within sandbox \"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d7301938d1a391ea755f71f305d3f6c9f4ad5f7ec3a7a49e3d14408081fee19a\"" May 13 08:26:15.795835 env[1260]: time="2025-05-13T08:26:15.795677906Z" level=info msg="StartContainer for \"d7301938d1a391ea755f71f305d3f6c9f4ad5f7ec3a7a49e3d14408081fee19a\"" May 13 08:26:15.935682 env[1260]: time="2025-05-13T08:26:15.935623892Z" level=info msg="StartContainer for \"d7301938d1a391ea755f71f305d3f6c9f4ad5f7ec3a7a49e3d14408081fee19a\" returns successfully" May 13 08:26:16.409930 kubelet[2219]: I0513 08:26:16.409678 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zls78" podStartSLOduration=73.944220097 podStartE2EDuration="1m24.409387873s" podCreationTimestamp="2025-05-13 08:24:52 +0000 UTC" firstStartedPulling="2025-05-13 08:26:05.292026979 +0000 UTC m=+96.800547458" 
lastFinishedPulling="2025-05-13 08:26:15.757194705 +0000 UTC m=+107.265715234" observedRunningTime="2025-05-13 08:26:16.406155782 +0000 UTC m=+107.914676311" watchObservedRunningTime="2025-05-13 08:26:16.409387873 +0000 UTC m=+107.917908403" May 13 08:26:16.842264 kubelet[2219]: I0513 08:26:16.842018 2219 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 08:26:16.842932 kubelet[2219]: I0513 08:26:16.842895 2219 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 08:26:20.898518 env[1260]: time="2025-05-13T08:26:20.895616820Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:20.910374 env[1260]: time="2025-05-13T08:26:20.906564704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:20.910871 env[1260]: time="2025-05-13T08:26:20.910783350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:20.915168 env[1260]: time="2025-05-13T08:26:20.914991456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 08:26:20.916860 env[1260]: time="2025-05-13T08:26:20.913328769Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 08:26:20.927709 env[1260]: time="2025-05-13T08:26:20.926012119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 08:26:20.939646 env[1260]: time="2025-05-13T08:26:20.939507476Z" level=info msg="CreateContainer within sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 08:26:20.980978 env[1260]: time="2025-05-13T08:26:20.980504735Z" level=info msg="CreateContainer within sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\"" May 13 08:26:20.989917 env[1260]: time="2025-05-13T08:26:20.987285742Z" level=info msg="StartContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\"" May 13 08:26:21.069768 systemd[1]: run-containerd-runc-k8s.io-8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59-runc.d2qoLP.mount: Deactivated successfully. 
May 13 08:26:21.180077 env[1260]: time="2025-05-13T08:26:21.177625266Z" level=info msg="StartContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" returns successfully" May 13 08:26:21.417816 kubelet[2219]: I0513 08:26:21.416987 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-769cd4b6f5-q7tml" podStartSLOduration=78.742347998 podStartE2EDuration="1m29.416957262s" podCreationTimestamp="2025-05-13 08:24:52 +0000 UTC" firstStartedPulling="2025-05-13 08:26:10.247409721 +0000 UTC m=+101.755930200" lastFinishedPulling="2025-05-13 08:26:20.922018935 +0000 UTC m=+112.430539464" observedRunningTime="2025-05-13 08:26:21.413991257 +0000 UTC m=+112.922511756" watchObservedRunningTime="2025-05-13 08:26:21.416957262 +0000 UTC m=+112.925477741" May 13 08:26:21.444933 env[1260]: time="2025-05-13T08:26:21.444813573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:21.447776 env[1260]: time="2025-05-13T08:26:21.447749631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:21.449879 env[1260]: time="2025-05-13T08:26:21.449853615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:21.454213 env[1260]: time="2025-05-13T08:26:21.454098452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:21.455189 env[1260]: time="2025-05-13T08:26:21.454400652Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 08:26:21.459826 env[1260]: time="2025-05-13T08:26:21.459776416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 08:26:21.463859 env[1260]: time="2025-05-13T08:26:21.460923325Z" level=info msg="CreateContainer within sandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 08:26:21.479000 audit[5614]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:21.486731 kernel: audit: type=1325 audit(1747124781.479:424): table=filter:123 family=2 entries=10 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:21.479000 audit[5614]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdde8cb800 a2=0 a3=7ffdde8cb7ec items=0 ppid=2353 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:21.495404 kernel: audit: type=1300 audit(1747124781.479:424): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdde8cb800 a2=0 a3=7ffdde8cb7ec items=0 ppid=2353 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:21.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:21.500326 env[1260]: time="2025-05-13T08:26:21.498815474Z" level=info msg="CreateContainer within sandbox 
\"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\"" May 13 08:26:21.500592 kernel: audit: type=1327 audit(1747124781.479:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:21.497000 audit[5614]: NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:21.504275 env[1260]: time="2025-05-13T08:26:21.501180007Z" level=info msg="StartContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\"" May 13 08:26:21.511863 kernel: audit: type=1325 audit(1747124781.497:425): table=nat:124 family=2 entries=20 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:21.511926 kernel: audit: type=1300 audit(1747124781.497:425): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdde8cb800 a2=0 a3=7ffdde8cb7ec items=0 ppid=2353 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:21.497000 audit[5614]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdde8cb800 a2=0 a3=7ffdde8cb7ec items=0 ppid=2353 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:21.515817 kernel: audit: type=1327 audit(1747124781.497:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:21.497000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:21.654974 env[1260]: time="2025-05-13T08:26:21.654930054Z" level=info msg="StartContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" returns successfully" May 13 08:26:22.018603 env[1260]: time="2025-05-13T08:26:22.018537219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:22.020847 env[1260]: time="2025-05-13T08:26:22.020820767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:22.023161 env[1260]: time="2025-05-13T08:26:22.023137309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:22.025078 env[1260]: time="2025-05-13T08:26:22.025052490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:26:22.025605 env[1260]: time="2025-05-13T08:26:22.025560474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 08:26:22.028370 env[1260]: time="2025-05-13T08:26:22.028338981Z" level=info msg="CreateContainer within sandbox \"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 08:26:22.050906 env[1260]: time="2025-05-13T08:26:22.050840533Z" level=info 
msg="CreateContainer within sandbox \"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ec7cb8a4af954273daa5d0a31ca3759f08f73b6d2dd4f91f6ab181f35bc6fd8\"" May 13 08:26:22.051934 env[1260]: time="2025-05-13T08:26:22.051902438Z" level=info msg="StartContainer for \"6ec7cb8a4af954273daa5d0a31ca3759f08f73b6d2dd4f91f6ab181f35bc6fd8\"" May 13 08:26:22.297393 env[1260]: time="2025-05-13T08:26:22.297256316Z" level=info msg="StartContainer for \"6ec7cb8a4af954273daa5d0a31ca3759f08f73b6d2dd4f91f6ab181f35bc6fd8\" returns successfully" May 13 08:26:22.449536 kubelet[2219]: I0513 08:26:22.449456 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-769cd4b6f5-h8qgj" podStartSLOduration=80.698311686 podStartE2EDuration="1m30.449414202s" podCreationTimestamp="2025-05-13 08:24:52 +0000 UTC" firstStartedPulling="2025-05-13 08:26:11.705224909 +0000 UTC m=+103.213745388" lastFinishedPulling="2025-05-13 08:26:21.456327415 +0000 UTC m=+112.964847904" observedRunningTime="2025-05-13 08:26:22.427348918 +0000 UTC m=+113.935869417" watchObservedRunningTime="2025-05-13 08:26:22.449414202 +0000 UTC m=+113.957934681" May 13 08:26:22.465000 audit[5686]: NETFILTER_CFG table=filter:125 family=2 entries=10 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.471661 kernel: audit: type=1325 audit(1747124782.465:426): table=filter:125 family=2 entries=10 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.465000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff28bb4940 a2=0 a3=7fff28bb492c items=0 ppid=2353 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 13 08:26:22.482600 kernel: audit: type=1300 audit(1747124782.465:426): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff28bb4940 a2=0 a3=7fff28bb492c items=0 ppid=2353 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:22.465000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:22.487603 kernel: audit: type=1327 audit(1747124782.465:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:22.491000 audit[5686]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.497599 kernel: audit: type=1325 audit(1747124782.491:427): table=nat:126 family=2 entries=20 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.491000 audit[5686]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff28bb4940 a2=0 a3=7fff28bb492c items=0 ppid=2353 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:22.491000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:22.519000 audit[5688]: NETFILTER_CFG table=filter:127 family=2 entries=10 op=nft_register_rule pid=5688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.519000 audit[5688]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc1dddfd80 a2=0 a3=7ffc1dddfd6c items=0 ppid=2353 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:22.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:22.525000 audit[5688]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:22.525000 audit[5688]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1dddfd80 a2=0 a3=7ffc1dddfd6c items=0 ppid=2353 pid=5688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:22.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:22.969024 systemd[1]: run-containerd-runc-k8s.io-6ec7cb8a4af954273daa5d0a31ca3759f08f73b6d2dd4f91f6ab181f35bc6fd8-runc.FwkwyO.mount: Deactivated successfully. 
May 13 08:26:23.189120 kubelet[2219]: I0513 08:26:23.189040 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6644f7cd55-wdj2v" podStartSLOduration=81.938967558 podStartE2EDuration="1m30.188993971s" podCreationTimestamp="2025-05-13 08:24:53 +0000 UTC" firstStartedPulling="2025-05-13 08:26:13.77651907 +0000 UTC m=+105.285039599" lastFinishedPulling="2025-05-13 08:26:22.026545533 +0000 UTC m=+113.535066012" observedRunningTime="2025-05-13 08:26:22.453320532 +0000 UTC m=+113.961841022" watchObservedRunningTime="2025-05-13 08:26:23.188993971 +0000 UTC m=+114.697514450" May 13 08:26:23.625000 audit[5697]: NETFILTER_CFG table=filter:129 family=2 entries=9 op=nft_register_rule pid=5697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:23.625000 audit[5697]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe090c82d0 a2=0 a3=7ffe090c82bc items=0 ppid=2353 pid=5697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:23.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:23.631000 audit[5697]: NETFILTER_CFG table=nat:130 family=2 entries=27 op=nft_register_chain pid=5697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:23.631000 audit[5697]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe090c82d0 a2=0 a3=7ffe090c82bc items=0 ppid=2353 pid=5697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:23.631000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:24.487000 audit[5700]: NETFILTER_CFG table=filter:131 family=2 entries=8 op=nft_register_rule pid=5700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:24.487000 audit[5700]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff730e3760 a2=0 a3=7fff730e374c items=0 ppid=2353 pid=5700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:24.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:24.544000 audit[5700]: NETFILTER_CFG table=nat:132 family=2 entries=38 op=nft_register_chain pid=5700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:24.544000 audit[5700]: SYSCALL arch=c000003e syscall=46 success=yes exit=13124 a0=3 a1=7fff730e3760 a2=0 a3=7fff730e374c items=0 ppid=2353 pid=5700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:24.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:24.562502 kubelet[2219]: I0513 08:26:24.562399 2219 topology_manager.go:215] "Topology Admit Handler" podUID="5198f365-2f3d-4184-8185-f65c6c9445da" podNamespace="calico-apiserver" podName="calico-apiserver-6644f7cd55-tpq9g" May 13 08:26:24.577222 kubelet[2219]: I0513 08:26:24.577129 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/5198f365-2f3d-4184-8185-f65c6c9445da-calico-apiserver-certs\") pod \"calico-apiserver-6644f7cd55-tpq9g\" (UID: \"5198f365-2f3d-4184-8185-f65c6c9445da\") " pod="calico-apiserver/calico-apiserver-6644f7cd55-tpq9g" May 13 08:26:24.577222 kubelet[2219]: I0513 08:26:24.577214 2219 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhzf\" (UniqueName: \"kubernetes.io/projected/5198f365-2f3d-4184-8185-f65c6c9445da-kube-api-access-bfhzf\") pod \"calico-apiserver-6644f7cd55-tpq9g\" (UID: \"5198f365-2f3d-4184-8185-f65c6c9445da\") " pod="calico-apiserver/calico-apiserver-6644f7cd55-tpq9g" May 13 08:26:24.778124 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.nPvtAe.mount: Deactivated successfully. May 13 08:26:24.868032 env[1260]: time="2025-05-13T08:26:24.867980679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-tpq9g,Uid:5198f365-2f3d-4184-8185-f65c6c9445da,Namespace:calico-apiserver,Attempt:0,}" May 13 08:26:25.144856 systemd-networkd[1030]: calib14abaae3a4: Link UP May 13 08:26:25.147712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:26:25.147769 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib14abaae3a4: link becomes ready May 13 08:26:25.148062 systemd-networkd[1030]: calib14abaae3a4: Gained carrier May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.025 [INFO][5725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0 calico-apiserver-6644f7cd55- calico-apiserver 5198f365-2f3d-4184-8185-f65c6c9445da 1223 0 2025-05-13 08:26:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6644f7cd55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-n-f896a7891b.novalocal calico-apiserver-6644f7cd55-tpq9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib14abaae3a4 [] []}} ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.027 [INFO][5725] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.072 [INFO][5738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" HandleID="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.087 [INFO][5738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" HandleID="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ace70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-n-f896a7891b.novalocal", "pod":"calico-apiserver-6644f7cd55-tpq9g", "timestamp":"2025-05-13 08:26:25.072966075 +0000 UTC"}, 
Hostname:"ci-3510-3-7-n-f896a7891b.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.087 [INFO][5738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.087 [INFO][5738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.087 [INFO][5738] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-n-f896a7891b.novalocal' May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.090 [INFO][5738] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.095 [INFO][5738] ipam/ipam.go 372: Looking up existing affinities for host host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.100 [INFO][5738] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.102 [INFO][5738] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.106 [INFO][5738] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.106 [INFO][5738] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.108 
[INFO][5738] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.119 [INFO][5738] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.130 [INFO][5738] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.8/26] block=192.168.24.0/26 handle="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.130 [INFO][5738] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.8/26] handle="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" host="ci-3510-3-7-n-f896a7891b.novalocal" May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.130 [INFO][5738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 08:26:25.166980 env[1260]: 2025-05-13 08:26:25.130 [INFO][5738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.8/26] IPv6=[] ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" HandleID="k8s-pod-network.d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.133 [INFO][5725] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"5198f365-2f3d-4184-8185-f65c6c9445da", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"", Pod:"calico-apiserver-6644f7cd55-tpq9g", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib14abaae3a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.133 [INFO][5725] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.8/32] ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.133 [INFO][5725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib14abaae3a4 ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.149 [INFO][5725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.149 [INFO][5725] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"5198f365-2f3d-4184-8185-f65c6c9445da", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 26, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec", Pod:"calico-apiserver-6644f7cd55-tpq9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib14abaae3a4", MAC:"5a:8c:94:4a:7b:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:25.168863 env[1260]: 2025-05-13 08:26:25.165 [INFO][5725] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec" Namespace="calico-apiserver" Pod="calico-apiserver-6644f7cd55-tpq9g" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--tpq9g-eth0" May 13 08:26:25.184000 
audit[5752]: NETFILTER_CFG table=filter:133 family=2 entries=50 op=nft_register_chain pid=5752 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:25.184000 audit[5752]: SYSCALL arch=c000003e syscall=46 success=yes exit=25048 a0=3 a1=7ffd4dc9f690 a2=0 a3=7ffd4dc9f67c items=0 ppid=4581 pid=5752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:25.184000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:25.241672 env[1260]: time="2025-05-13T08:26:25.241413343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:26:25.241672 env[1260]: time="2025-05-13T08:26:25.241509858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:26:25.241672 env[1260]: time="2025-05-13T08:26:25.241525177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:26:25.242158 env[1260]: time="2025-05-13T08:26:25.242095421Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec pid=5767 runtime=io.containerd.runc.v2 May 13 08:26:25.351293 env[1260]: time="2025-05-13T08:26:25.351239473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f7cd55-tpq9g,Uid:5198f365-2f3d-4184-8185-f65c6c9445da,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec\"" May 13 08:26:25.357903 env[1260]: time="2025-05-13T08:26:25.357846526Z" level=info msg="CreateContainer within sandbox \"d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 08:26:25.382891 env[1260]: time="2025-05-13T08:26:25.382838061Z" level=info msg="CreateContainer within sandbox \"d82310e58e64b82a737da876e98471ec9393fa7ae97786470423526b9c4a81ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25b1a9d18bcf05cec47ff2067a83320f32495413c908493037ffe1e5b6d3d05f\"" May 13 08:26:25.384564 env[1260]: time="2025-05-13T08:26:25.384536248Z" level=info msg="StartContainer for \"25b1a9d18bcf05cec47ff2067a83320f32495413c908493037ffe1e5b6d3d05f\"" May 13 08:26:25.431526 env[1260]: time="2025-05-13T08:26:25.431404352Z" level=info msg="StopContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" with timeout 30 (s)" May 13 08:26:25.432400 env[1260]: time="2025-05-13T08:26:25.432372981Z" level=info msg="Stop container \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" with signal terminated" May 13 08:26:25.564000 audit[5853]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_register_rule pid=5853 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" May 13 08:26:25.564000 audit[5853]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd7a8c8780 a2=0 a3=7ffd7a8c876c items=0 ppid=2353 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:25.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:25.570000 audit[5853]: NETFILTER_CFG table=nat:135 family=2 entries=40 op=nft_unregister_chain pid=5853 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:25.570000 audit[5853]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffd7a8c8780 a2=0 a3=7ffd7a8c876c items=0 ppid=2353 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:25.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:25.696521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59-rootfs.mount: Deactivated successfully. 
May 13 08:26:26.073159 env[1260]: time="2025-05-13T08:26:26.072958574Z" level=info msg="StartContainer for \"25b1a9d18bcf05cec47ff2067a83320f32495413c908493037ffe1e5b6d3d05f\" returns successfully" May 13 08:26:26.078983 env[1260]: time="2025-05-13T08:26:26.078893038Z" level=info msg="shim disconnected" id=8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59 May 13 08:26:26.079252 env[1260]: time="2025-05-13T08:26:26.079215457Z" level=warning msg="cleaning up after shim disconnected" id=8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59 namespace=k8s.io May 13 08:26:26.079488 env[1260]: time="2025-05-13T08:26:26.079454986Z" level=info msg="cleaning up dead shim" May 13 08:26:26.092170 env[1260]: time="2025-05-13T08:26:26.092115702Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:26:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5860 runtime=io.containerd.runc.v2\n" May 13 08:26:26.096793 env[1260]: time="2025-05-13T08:26:26.096755683Z" level=info msg="StopContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" returns successfully" May 13 08:26:26.097446 env[1260]: time="2025-05-13T08:26:26.097419537Z" level=info msg="StopPodSandbox for \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\"" May 13 08:26:26.097635 env[1260]: time="2025-05-13T08:26:26.097609331Z" level=info msg="Container to stop \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:26:26.104366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe-shm.mount: Deactivated successfully. May 13 08:26:26.150288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe-rootfs.mount: Deactivated successfully. 
May 13 08:26:26.159810 env[1260]: time="2025-05-13T08:26:26.159760865Z" level=info msg="shim disconnected" id=62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe May 13 08:26:26.160083 env[1260]: time="2025-05-13T08:26:26.160062444Z" level=warning msg="cleaning up after shim disconnected" id=62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe namespace=k8s.io May 13 08:26:26.160191 env[1260]: time="2025-05-13T08:26:26.160173757Z" level=info msg="cleaning up dead shim" May 13 08:26:26.170925 env[1260]: time="2025-05-13T08:26:26.170880575Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:26:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5895 runtime=io.containerd.runc.v2\n" May 13 08:26:26.234425 systemd-networkd[1030]: cali33a7d3c021c: Link DOWN May 13 08:26:26.234433 systemd-networkd[1030]: cali33a7d3c021c: Lost carrier May 13 08:26:26.290000 audit[5938]: NETFILTER_CFG table=filter:136 family=2 entries=56 op=nft_register_rule pid=5938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:26.290000 audit[5938]: SYSCALL arch=c000003e syscall=46 success=yes exit=9064 a0=3 a1=7ffcf4efb2a0 a2=0 a3=7ffcf4efb28c items=0 ppid=4581 pid=5938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.290000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:26.293000 audit[5938]: NETFILTER_CFG table=filter:137 family=2 entries=4 op=nft_unregister_chain pid=5938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:26.293000 audit[5938]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffcf4efb2a0 a2=0 a3=6f72662d696c6163 items=0 ppid=4581 pid=5938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.293000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.231 [INFO][5923] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.233 [INFO][5923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" iface="eth0" netns="/var/run/netns/cni-31143a7e-3467-7ab6-95ae-26a1f469e532" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.233 [INFO][5923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" iface="eth0" netns="/var/run/netns/cni-31143a7e-3467-7ab6-95ae-26a1f469e532" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.257 [INFO][5923] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" after=24.243335ms iface="eth0" netns="/var/run/netns/cni-31143a7e-3467-7ab6-95ae-26a1f469e532" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.257 [INFO][5923] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.257 [INFO][5923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.306 [INFO][5932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.306 [INFO][5932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.306 [INFO][5932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.388 [INFO][5932] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.388 [INFO][5932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.391 [INFO][5932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:26.394356 env[1260]: 2025-05-13 08:26:26.393 [INFO][5923] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:26.403601 env[1260]: time="2025-05-13T08:26:26.394636792Z" level=info msg="TearDown network for sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" successfully" May 13 08:26:26.403601 env[1260]: time="2025-05-13T08:26:26.394692710Z" level=info msg="StopPodSandbox for \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" returns successfully" May 13 08:26:26.401186 systemd[1]: run-netns-cni\x2d31143a7e\x2d3467\x2d7ab6\x2d95ae\x2d26a1f469e532.mount: Deactivated successfully. 
May 13 08:26:26.404784 env[1260]: time="2025-05-13T08:26:26.404751766Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:26:26.474637 kubelet[2219]: I0513 08:26:26.464907 2219 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:26.531280 kernel: kauditd_printk_skb: 35 callbacks suppressed May 13 08:26:26.531459 kernel: audit: type=1325 audit(1747124786.524:439): table=filter:138 family=2 entries=8 op=nft_register_rule pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:26.524000 audit[5966]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:26.524000 audit[5966]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd4264e680 a2=0 a3=7ffd4264e66c items=0 ppid=2353 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.539634 kernel: audit: type=1300 audit(1747124786.524:439): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd4264e680 a2=0 a3=7ffd4264e66c items=0 ppid=2353 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:26.543607 kernel: audit: type=1327 audit(1747124786.524:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:26.544000 audit[5966]: NETFILTER_CFG 
table=nat:139 family=2 entries=36 op=nft_register_rule pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:26.550630 kernel: audit: type=1325 audit(1747124786.544:440): table=nat:139 family=2 entries=36 op=nft_register_rule pid=5966 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:26.544000 audit[5966]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd4264e680 a2=0 a3=7ffd4264e66c items=0 ppid=2353 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.564680 kernel: audit: type=1300 audit(1747124786.544:440): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd4264e680 a2=0 a3=7ffd4264e66c items=0 ppid=2353 pid=5966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:26.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:26.568606 kernel: audit: type=1327 audit(1747124786.544:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.486 [WARNING][5954] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe", Pod:"calico-apiserver-769cd4b6f5-q7tml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a7d3c021c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.487 [INFO][5954] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.487 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.487 [INFO][5954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.487 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.579 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.580 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.580 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.590 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.590 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.599 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:26.601748 env[1260]: 2025-05-13 08:26:26.600 [INFO][5954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:26.602437 env[1260]: time="2025-05-13T08:26:26.602389980Z" level=info msg="TearDown network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" successfully" May 13 08:26:26.602532 env[1260]: time="2025-05-13T08:26:26.602511543Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" returns successfully" May 13 08:26:26.623113 kubelet[2219]: I0513 08:26:26.623017 2219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6644f7cd55-tpq9g" podStartSLOduration=2.62297061 podStartE2EDuration="2.62297061s" podCreationTimestamp="2025-05-13 08:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:26:26.470726032 +0000 UTC m=+117.979246531" watchObservedRunningTime="2025-05-13 08:26:26.62297061 +0000 UTC m=+118.131491089" May 13 08:26:26.796324 
kubelet[2219]: I0513 08:26:26.796190 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-calico-apiserver-certs\") pod \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" (UID: \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\") " May 13 08:26:26.796324 kubelet[2219]: I0513 08:26:26.796258 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ssm9\" (UniqueName: \"kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9\") pod \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\" (UID: \"ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7\") " May 13 08:26:26.806164 kubelet[2219]: I0513 08:26:26.806104 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" (UID: "ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:26:26.808829 kubelet[2219]: I0513 08:26:26.808782 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9" (OuterVolumeSpecName: "kube-api-access-9ssm9") pod "ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" (UID: "ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7"). InnerVolumeSpecName "kube-api-access-9ssm9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:26:26.811125 systemd[1]: var-lib-kubelet-pods-ae2bcff0\x2d8b46\x2d4bc4\x2d98a5\x2d5dba578d1ef7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 13 08:26:26.816413 systemd[1]: var-lib-kubelet-pods-ae2bcff0\x2d8b46\x2d4bc4\x2d98a5\x2d5dba578d1ef7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ssm9.mount: Deactivated successfully. May 13 08:26:26.897046 kubelet[2219]: I0513 08:26:26.896981 2219 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9ssm9\" (UniqueName: \"kubernetes.io/projected/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-kube-api-access-9ssm9\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:26.897046 kubelet[2219]: I0513 08:26:26.897030 2219 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7-calico-apiserver-certs\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:27.063763 systemd-networkd[1030]: calib14abaae3a4: Gained IPv6LL May 13 08:26:27.559000 audit[5973]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:27.559000 audit[5973]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffee15c6140 a2=0 a3=7ffee15c612c items=0 ppid=2353 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:27.572649 kernel: audit: type=1325 audit(1747124787.559:441): table=filter:140 family=2 entries=8 op=nft_register_rule pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:27.572863 kernel: audit: type=1300 audit(1747124787.559:441): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffee15c6140 a2=0 a3=7ffee15c612c items=0 ppid=2353 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:27.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:27.576369 kernel: audit: type=1327 audit(1747124787.559:441): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:27.580305 env[1260]: time="2025-05-13T08:26:27.580257799Z" level=info msg="StopContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" with timeout 30 (s)" May 13 08:26:27.581080 env[1260]: time="2025-05-13T08:26:27.581049538Z" level=info msg="Stop container \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" with signal terminated" May 13 08:26:27.606000 audit[5973]: NETFILTER_CFG table=nat:141 family=2 entries=40 op=nft_register_chain pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:27.606000 audit[5973]: SYSCALL arch=c000003e syscall=46 success=yes exit=13124 a0=3 a1=7ffee15c6140 a2=0 a3=7ffee15c612c items=0 ppid=2353 pid=5973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:27.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:27.611606 kernel: audit: type=1325 audit(1747124787.606:442): table=nat:141 family=2 entries=40 op=nft_register_chain pid=5973 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:27.659966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a-rootfs.mount: Deactivated successfully. 
May 13 08:26:27.667534 env[1260]: time="2025-05-13T08:26:27.667480016Z" level=info msg="shim disconnected" id=e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a May 13 08:26:27.667534 env[1260]: time="2025-05-13T08:26:27.667532537Z" level=warning msg="cleaning up after shim disconnected" id=e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a namespace=k8s.io May 13 08:26:27.667534 env[1260]: time="2025-05-13T08:26:27.667544930Z" level=info msg="cleaning up dead shim" May 13 08:26:27.676525 env[1260]: time="2025-05-13T08:26:27.676468270Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:26:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5992 runtime=io.containerd.runc.v2\n" May 13 08:26:27.689790 env[1260]: time="2025-05-13T08:26:27.689740904Z" level=info msg="StopContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" returns successfully" May 13 08:26:27.690621 env[1260]: time="2025-05-13T08:26:27.690568292Z" level=info msg="StopPodSandbox for \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\"" May 13 08:26:27.690735 env[1260]: time="2025-05-13T08:26:27.690707649Z" level=info msg="Container to stop \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:26:27.694254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980-shm.mount: Deactivated successfully. May 13 08:26:27.733892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980-rootfs.mount: Deactivated successfully. 
May 13 08:26:27.737986 env[1260]: time="2025-05-13T08:26:27.737937370Z" level=info msg="shim disconnected" id=af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980 May 13 08:26:27.738295 env[1260]: time="2025-05-13T08:26:27.738272182Z" level=warning msg="cleaning up after shim disconnected" id=af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980 namespace=k8s.io May 13 08:26:27.738408 env[1260]: time="2025-05-13T08:26:27.738389717Z" level=info msg="cleaning up dead shim" May 13 08:26:27.747086 env[1260]: time="2025-05-13T08:26:27.747055363Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:26:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6025 runtime=io.containerd.runc.v2\n" May 13 08:26:27.824967 systemd-networkd[1030]: cali602f5b1f3e7: Link DOWN May 13 08:26:27.825277 systemd-networkd[1030]: cali602f5b1f3e7: Lost carrier May 13 08:26:27.863000 audit[6068]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_rule pid=6068 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:27.863000 audit[6068]: SYSCALL arch=c000003e syscall=46 success=yes exit=9080 a0=3 a1=7ffe8447e8c0 a2=0 a3=7ffe8447e8ac items=0 ppid=4581 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:27.863000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:27.863000 audit[6068]: NETFILTER_CFG table=filter:143 family=2 entries=4 op=nft_unregister_chain pid=6068 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 13 08:26:27.863000 audit[6068]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffe8447e8c0 a2=0 a3=6f72662d696c6163 items=0 ppid=4581 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:27.863000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.820 [INFO][6052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.823 [INFO][6052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" iface="eth0" netns="/var/run/netns/cni-9bfa4cc1-d6b0-fe14-a85c-0c87d775b110" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.824 [INFO][6052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" iface="eth0" netns="/var/run/netns/cni-9bfa4cc1-d6b0-fe14-a85c-0c87d775b110" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.835 [INFO][6052] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" after=11.762496ms iface="eth0" netns="/var/run/netns/cni-9bfa4cc1-d6b0-fe14-a85c-0c87d775b110" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.835 [INFO][6052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.835 [INFO][6052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.873 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.873 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.873 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.934 [INFO][6059] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.934 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.937 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:27.939942 env[1260]: 2025-05-13 08:26:27.938 [INFO][6052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:27.943695 env[1260]: time="2025-05-13T08:26:27.943646926Z" level=info msg="TearDown network for sandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" successfully" May 13 08:26:27.943821 env[1260]: time="2025-05-13T08:26:27.943798677Z" level=info msg="StopPodSandbox for \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" returns successfully" May 13 08:26:27.944369 systemd[1]: run-netns-cni\x2d9bfa4cc1\x2dd6b0\x2dfe14\x2da85c\x2d0c87d775b110.mount: Deactivated successfully. 
May 13 08:26:27.945663 env[1260]: time="2025-05-13T08:26:27.945636263Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.015 [WARNING][6082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0", GenerateName:"calico-apiserver-769cd4b6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3ad376-6a34-448c-8bd7-d065d8e46df2", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"769cd4b6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980", Pod:"calico-apiserver-769cd4b6f5-h8qgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali602f5b1f3e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 
13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.015 [INFO][6082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.015 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.015 [INFO][6082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.015 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.038 [INFO][6090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.038 [INFO][6090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.038 [INFO][6090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.053 [WARNING][6090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.053 [INFO][6090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.055 [INFO][6090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:28.057624 env[1260]: 2025-05-13 08:26:28.056 [INFO][6082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:28.058376 env[1260]: time="2025-05-13T08:26:28.058339247Z" level=info msg="TearDown network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" successfully" May 13 08:26:28.058466 env[1260]: time="2025-05-13T08:26:28.058444921Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" returns successfully" May 13 08:26:28.206070 kubelet[2219]: I0513 08:26:28.205984 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be3ad376-6a34-448c-8bd7-d065d8e46df2-calico-apiserver-certs\") pod \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" (UID: \"be3ad376-6a34-448c-8bd7-d065d8e46df2\") " May 13 08:26:28.206660 kubelet[2219]: I0513 08:26:28.206143 2219 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv8m6\" (UniqueName: 
\"kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6\") pod \"be3ad376-6a34-448c-8bd7-d065d8e46df2\" (UID: \"be3ad376-6a34-448c-8bd7-d065d8e46df2\") " May 13 08:26:28.213281 systemd[1]: var-lib-kubelet-pods-be3ad376\x2d6a34\x2d448c\x2d8bd7\x2dd065d8e46df2-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 08:26:28.215056 kubelet[2219]: I0513 08:26:28.215022 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3ad376-6a34-448c-8bd7-d065d8e46df2-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "be3ad376-6a34-448c-8bd7-d065d8e46df2" (UID: "be3ad376-6a34-448c-8bd7-d065d8e46df2"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:26:28.225399 systemd[1]: var-lib-kubelet-pods-be3ad376\x2d6a34\x2d448c\x2d8bd7\x2dd065d8e46df2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbv8m6.mount: Deactivated successfully. May 13 08:26:28.231419 kubelet[2219]: I0513 08:26:28.231272 2219 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6" (OuterVolumeSpecName: "kube-api-access-bv8m6") pod "be3ad376-6a34-448c-8bd7-d065d8e46df2" (UID: "be3ad376-6a34-448c-8bd7-d065d8e46df2"). InnerVolumeSpecName "kube-api-access-bv8m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:26:28.306476 kubelet[2219]: I0513 08:26:28.306425 2219 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be3ad376-6a34-448c-8bd7-d065d8e46df2-calico-apiserver-certs\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:28.306476 kubelet[2219]: I0513 08:26:28.306460 2219 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bv8m6\" (UniqueName: \"kubernetes.io/projected/be3ad376-6a34-448c-8bd7-d065d8e46df2-kube-api-access-bv8m6\") on node \"ci-3510-3-7-n-f896a7891b.novalocal\" DevicePath \"\"" May 13 08:26:28.481941 kubelet[2219]: I0513 08:26:28.480445 2219 scope.go:117] "RemoveContainer" containerID="e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a" May 13 08:26:28.503657 env[1260]: time="2025-05-13T08:26:28.494813067Z" level=info msg="RemoveContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\"" May 13 08:26:28.510256 env[1260]: time="2025-05-13T08:26:28.509952747Z" level=info msg="RemoveContainer for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" returns successfully" May 13 08:26:28.511620 kubelet[2219]: I0513 08:26:28.511524 2219 scope.go:117] "RemoveContainer" containerID="e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a" May 13 08:26:28.520657 env[1260]: time="2025-05-13T08:26:28.514099433Z" level=error msg="ContainerStatus for \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\": not found" May 13 08:26:28.520946 kubelet[2219]: E0513 08:26:28.519624 2219 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\": not found" containerID="e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a" May 13 08:26:28.520946 kubelet[2219]: I0513 08:26:28.519707 2219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a"} err="failed to get container status \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4f41827b2d1a8957a730364f3b7b86db63d98ded2776d73e95aa3fcb885529a\": not found" May 13 08:26:28.636000 audit[6100]: NETFILTER_CFG table=filter:144 family=2 entries=8 op=nft_register_rule pid=6100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:28.636000 audit[6100]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff7417b6d0 a2=0 a3=7fff7417b6bc items=0 ppid=2353 pid=6100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:28.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:28.646000 audit[6100]: NETFILTER_CFG table=nat:145 family=2 entries=40 op=nft_unregister_chain pid=6100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:26:28.646000 audit[6100]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7fff7417b6d0 a2=0 a3=7fff7417b6bc items=0 ppid=2353 pid=6100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:26:28.646000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:26:28.684520 kubelet[2219]: I0513 08:26:28.684485 2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7" path="/var/lib/kubelet/pods/ae2bcff0-8b46-4bc4-98a5-5dba578d1ef7/volumes" May 13 08:26:28.685308 kubelet[2219]: I0513 08:26:28.685287 2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3ad376-6a34-448c-8bd7-d065d8e46df2" path="/var/lib/kubelet/pods/be3ad376-6a34-448c-8bd7-d065d8e46df2/volumes" May 13 08:26:28.716779 kubelet[2219]: I0513 08:26:28.716733 2219 scope.go:117] "RemoveContainer" containerID="8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59" May 13 08:26:28.723890 env[1260]: time="2025-05-13T08:26:28.723395458Z" level=info msg="RemoveContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\"" May 13 08:26:28.735683 env[1260]: time="2025-05-13T08:26:28.735070941Z" level=info msg="RemoveContainer for \"8eab7e8fb12c72fe46ff7e3ac026aa134ed276da4b1213c589d9b4a1582a1f59\" returns successfully" May 13 08:26:28.739718 env[1260]: time="2025-05-13T08:26:28.739617224Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.818 [WARNING][6118] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.818 [INFO][6118] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.818 [INFO][6118] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" iface="eth0" netns="" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.818 [INFO][6118] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.818 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.852 [INFO][6125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.852 [INFO][6125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.852 [INFO][6125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.862 [WARNING][6125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.862 [INFO][6125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.864 [INFO][6125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:28.866792 env[1260]: 2025-05-13 08:26:28.865 [INFO][6118] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.867460 env[1260]: time="2025-05-13T08:26:28.867409664Z" level=info msg="TearDown network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" successfully" May 13 08:26:28.867602 env[1260]: time="2025-05-13T08:26:28.867534784Z" level=info msg="StopPodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" returns successfully" May 13 08:26:28.868901 env[1260]: time="2025-05-13T08:26:28.868876949Z" level=info msg="RemovePodSandbox for \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:26:28.869099 env[1260]: time="2025-05-13T08:26:28.869026276Z" level=info msg="Forcibly stopping sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\"" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.914 [WARNING][6145] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.914 [INFO][6145] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.914 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" iface="eth0" netns="" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.914 [INFO][6145] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.914 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.940 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.940 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.940 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.949 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.949 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" HandleID="k8s-pod-network.912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--kube--controllers--5c9b5b8b87--ffc5v-eth0" May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.951 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:28.958162 env[1260]: 2025-05-13 08:26:28.954 [INFO][6145] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435" May 13 08:26:28.958821 env[1260]: time="2025-05-13T08:26:28.958786291Z" level=info msg="TearDown network for sandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" successfully" May 13 08:26:28.963388 env[1260]: time="2025-05-13T08:26:28.963357232Z" level=info msg="RemovePodSandbox \"912637247de89eee9f6986d0d63d59fb0be0bcc8a65b1ac45478f97dc7b03435\" returns successfully" May 13 08:26:28.964028 env[1260]: time="2025-05-13T08:26:28.964001598Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.011 [WARNING][6171] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.067224 env[1260]: 
2025-05-13 08:26:29.011 [INFO][6171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.011 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.011 [INFO][6171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.011 [INFO][6171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.054 [INFO][6178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.054 [INFO][6178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.054 [INFO][6178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.062 [WARNING][6178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.062 [INFO][6178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.063 [INFO][6178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.067224 env[1260]: 2025-05-13 08:26:29.065 [INFO][6171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.068000 env[1260]: time="2025-05-13T08:26:29.067962247Z" level=info msg="TearDown network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" successfully" May 13 08:26:29.068164 env[1260]: time="2025-05-13T08:26:29.068113527Z" level=info msg="StopPodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" returns successfully" May 13 08:26:29.068884 env[1260]: time="2025-05-13T08:26:29.068858727Z" level=info msg="RemovePodSandbox for \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:26:29.069014 env[1260]: time="2025-05-13T08:26:29.068973077Z" level=info msg="Forcibly stopping sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\"" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.118 [WARNING][6196] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.119 [INFO][6196] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.119 [INFO][6196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" iface="eth0" netns="" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.119 [INFO][6196] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.119 [INFO][6196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.145 [INFO][6203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.145 [INFO][6203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.145 [INFO][6203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.156 [WARNING][6203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.156 [INFO][6203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" HandleID="k8s-pod-network.454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.158 [INFO][6203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.160354 env[1260]: 2025-05-13 08:26:29.159 [INFO][6196] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b" May 13 08:26:29.160980 env[1260]: time="2025-05-13T08:26:29.160943634Z" level=info msg="TearDown network for sandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" successfully" May 13 08:26:29.165719 env[1260]: time="2025-05-13T08:26:29.165651699Z" level=info msg="RemovePodSandbox \"454a3d8ef9c92642426a401f405481d6691b7ce90302dbebcc763d5f76f7411b\" returns successfully" May 13 08:26:29.166390 env[1260]: time="2025-05-13T08:26:29.166362713Z" level=info msg="StopPodSandbox for \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\"" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.210 [WARNING][6221] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.210 
[INFO][6221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.210 [INFO][6221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" iface="eth0" netns="" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.210 [INFO][6221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.211 [INFO][6221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.245 [INFO][6229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.246 [INFO][6229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.246 [INFO][6229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.253 [WARNING][6229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.253 [INFO][6229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.255 [INFO][6229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.258166 env[1260]: 2025-05-13 08:26:29.256 [INFO][6221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.258745 env[1260]: time="2025-05-13T08:26:29.258207049Z" level=info msg="TearDown network for sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" successfully" May 13 08:26:29.258745 env[1260]: time="2025-05-13T08:26:29.258246314Z" level=info msg="StopPodSandbox for \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" returns successfully" May 13 08:26:29.259298 env[1260]: time="2025-05-13T08:26:29.259266362Z" level=info msg="RemovePodSandbox for \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\"" May 13 08:26:29.259502 env[1260]: time="2025-05-13T08:26:29.259460574Z" level=info msg="Forcibly stopping sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\"" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.305 [WARNING][6247] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.305 [INFO][6247] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.305 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" iface="eth0" netns="" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.306 [INFO][6247] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.306 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.330 [INFO][6254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.330 [INFO][6254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.330 [INFO][6254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.343 [WARNING][6254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.343 [INFO][6254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" HandleID="k8s-pod-network.62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--q7tml-eth0" May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.345 [INFO][6254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.360822 env[1260]: 2025-05-13 08:26:29.357 [INFO][6247] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe" May 13 08:26:29.360822 env[1260]: time="2025-05-13T08:26:29.359656467Z" level=info msg="TearDown network for sandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" successfully" May 13 08:26:29.376211 env[1260]: time="2025-05-13T08:26:29.375058131Z" level=info msg="RemovePodSandbox \"62a3793831a1a5be2fab69442dccbfb28eca8a83808cf462f0bcfe5c1dcc59fe\" returns successfully" May 13 08:26:29.379028 env[1260]: time="2025-05-13T08:26:29.378940321Z" level=info msg="StopPodSandbox for \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\"" May 13 08:26:29.379344 env[1260]: time="2025-05-13T08:26:29.379259834Z" level=info msg="TearDown network for sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" successfully" May 13 08:26:29.379472 env[1260]: time="2025-05-13T08:26:29.379450561Z" level=info msg="StopPodSandbox for \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" returns successfully" May 13 08:26:29.380055 
env[1260]: time="2025-05-13T08:26:29.379992240Z" level=info msg="RemovePodSandbox for \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\"" May 13 08:26:29.382763 env[1260]: time="2025-05-13T08:26:29.382694646Z" level=info msg="Forcibly stopping sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\"" May 13 08:26:29.382870 env[1260]: time="2025-05-13T08:26:29.382843251Z" level=info msg="TearDown network for sandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" successfully" May 13 08:26:29.388925 env[1260]: time="2025-05-13T08:26:29.388883351Z" level=info msg="RemovePodSandbox \"ba85cea17c667e17cf8e990b20f4e6b74743da1ad494755e52e64f03d7163288\" returns successfully" May 13 08:26:29.389300 env[1260]: time="2025-05-13T08:26:29.389267930Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.432 [WARNING][6273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033c33ae-894a-48f0-a6ac-c8632ff173d5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125", Pod:"coredns-7db6d8ff4d-fx4p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b937709769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.433 [INFO][6273] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.433 [INFO][6273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" iface="eth0" netns="" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.433 [INFO][6273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.433 [INFO][6273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.466 [INFO][6280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.466 [INFO][6280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.466 [INFO][6280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.474 [WARNING][6280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.474 [INFO][6280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.475 [INFO][6280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.478356 env[1260]: 2025-05-13 08:26:29.477 [INFO][6273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.479402 env[1260]: time="2025-05-13T08:26:29.479366074Z" level=info msg="TearDown network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" successfully" May 13 08:26:29.479493 env[1260]: time="2025-05-13T08:26:29.479472228Z" level=info msg="StopPodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" returns successfully" May 13 08:26:29.480440 env[1260]: time="2025-05-13T08:26:29.480071849Z" level=info msg="RemovePodSandbox for \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:26:29.480440 env[1260]: time="2025-05-13T08:26:29.480121674Z" level=info msg="Forcibly stopping sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\"" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.523 [WARNING][6300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"033c33ae-894a-48f0-a6ac-c8632ff173d5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"433b2daea347d6818efa93ba39394707bd416ff5f354fb652b14a4e6d896b125", Pod:"coredns-7db6d8ff4d-fx4p8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0b937709769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.524 [INFO][6300] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.524 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" iface="eth0" netns="" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.524 [INFO][6300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.524 [INFO][6300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.556 [INFO][6307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.556 [INFO][6307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.557 [INFO][6307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.570 [WARNING][6307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.570 [INFO][6307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" HandleID="k8s-pod-network.06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--fx4p8-eth0" May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.574 [INFO][6307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.577517 env[1260]: 2025-05-13 08:26:29.575 [INFO][6300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd" May 13 08:26:29.578154 env[1260]: time="2025-05-13T08:26:29.577543543Z" level=info msg="TearDown network for sandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" successfully" May 13 08:26:29.581786 env[1260]: time="2025-05-13T08:26:29.581750496Z" level=info msg="RemovePodSandbox \"06cac7ef774571e653510c152a842d345794ca584abf0c0370866ad632e7aedd\" returns successfully" May 13 08:26:29.582629 env[1260]: time="2025-05-13T08:26:29.582561151Z" level=info msg="StopPodSandbox for \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\"" May 13 08:26:29.582849 env[1260]: time="2025-05-13T08:26:29.582804078Z" level=info msg="TearDown network for sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" successfully" May 13 08:26:29.582944 env[1260]: time="2025-05-13T08:26:29.582924028Z" level=info msg="StopPodSandbox for \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" returns successfully" May 13 08:26:29.583366 env[1260]: 
time="2025-05-13T08:26:29.583343102Z" level=info msg="RemovePodSandbox for \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\"" May 13 08:26:29.583475 env[1260]: time="2025-05-13T08:26:29.583441080Z" level=info msg="Forcibly stopping sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\"" May 13 08:26:29.583615 env[1260]: time="2025-05-13T08:26:29.583569727Z" level=info msg="TearDown network for sandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" successfully" May 13 08:26:29.587810 env[1260]: time="2025-05-13T08:26:29.587779255Z" level=info msg="RemovePodSandbox \"bbf505db7fde1f53c26fd044a77307fb04d2d031c731769e4bb19fffb199a7f2\" returns successfully" May 13 08:26:29.588362 env[1260]: time="2025-05-13T08:26:29.588307689Z" level=info msg="StopPodSandbox for \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\"" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.642 [WARNING][6326] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.642 [INFO][6326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.642 [INFO][6326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" iface="eth0" netns="" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.642 [INFO][6326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.642 [INFO][6326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.672 [INFO][6333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.672 [INFO][6333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.672 [INFO][6333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.680 [WARNING][6333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.680 [INFO][6333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.684 [INFO][6333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.686426 env[1260]: 2025-05-13 08:26:29.685 [INFO][6326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.687065 env[1260]: time="2025-05-13T08:26:29.687029201Z" level=info msg="TearDown network for sandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" successfully" May 13 08:26:29.687158 env[1260]: time="2025-05-13T08:26:29.687137880Z" level=info msg="StopPodSandbox for \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" returns successfully" May 13 08:26:29.687995 env[1260]: time="2025-05-13T08:26:29.687940420Z" level=info msg="RemovePodSandbox for \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\"" May 13 08:26:29.688069 env[1260]: time="2025-05-13T08:26:29.687997559Z" level=info msg="Forcibly stopping sandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\"" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.753 [WARNING][6351] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.754 [INFO][6351] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.754 [INFO][6351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" iface="eth0" netns="" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.754 [INFO][6351] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.754 [INFO][6351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.779 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.779 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.779 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.789 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.789 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" HandleID="k8s-pod-network.af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.791 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.793840 env[1260]: 2025-05-13 08:26:29.792 [INFO][6351] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980" May 13 08:26:29.794629 env[1260]: time="2025-05-13T08:26:29.793894532Z" level=info msg="TearDown network for sandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" successfully" May 13 08:26:29.799380 env[1260]: time="2025-05-13T08:26:29.799003706Z" level=info msg="RemovePodSandbox \"af7d9b07756f387525055b0af3e375c8b6d27e5ac750b9684a83d5d831d83980\" returns successfully" May 13 08:26:29.799793 env[1260]: time="2025-05-13T08:26:29.799756461Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.850 [WARNING][6377] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.850 
[INFO][6377] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.850 [INFO][6377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.850 [INFO][6377] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.850 [INFO][6377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.882 [INFO][6384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.882 [INFO][6384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.882 [INFO][6384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.890 [WARNING][6384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.890 [INFO][6384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.891 [INFO][6384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:29.894018 env[1260]: 2025-05-13 08:26:29.892 [INFO][6377] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:29.894514 env[1260]: time="2025-05-13T08:26:29.894059805Z" level=info msg="TearDown network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" successfully" May 13 08:26:29.894514 env[1260]: time="2025-05-13T08:26:29.894108509Z" level=info msg="StopPodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" returns successfully" May 13 08:26:29.895025 env[1260]: time="2025-05-13T08:26:29.894998156Z" level=info msg="RemovePodSandbox for \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:26:29.895084 env[1260]: time="2025-05-13T08:26:29.895037942Z" level=info msg="Forcibly stopping sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\"" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:29.972 [WARNING][6403] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" WorkloadEndpoint="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:29.972 [INFO][6403] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:29.972 [INFO][6403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" iface="eth0" netns="" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:29.972 [INFO][6403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:29.972 [INFO][6403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.019 [INFO][6410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.019 [INFO][6410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.019 [INFO][6410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.027 [WARNING][6410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.027 [INFO][6410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" HandleID="k8s-pod-network.8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--769cd4b6f5--h8qgj-eth0" May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.028 [INFO][6410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.032610 env[1260]: 2025-05-13 08:26:30.030 [INFO][6403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7" May 13 08:26:30.033533 env[1260]: time="2025-05-13T08:26:30.033496405Z" level=info msg="TearDown network for sandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" successfully" May 13 08:26:30.038496 env[1260]: time="2025-05-13T08:26:30.038433090Z" level=info msg="RemovePodSandbox \"8728d9161b07bfc52162bd3dd4379bf030f462a3ed3a4351d7f63aa2b82b0aa7\" returns successfully" May 13 08:26:30.039463 env[1260]: time="2025-05-13T08:26:30.039423400Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.090 [WARNING][6428] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b8bef73-58a7-4997-947c-91687cbacd52", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c", Pod:"coredns-7db6d8ff4d-m5nkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1b2a54de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.090 [INFO][6428] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.090 [INFO][6428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" iface="eth0" netns="" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.090 [INFO][6428] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.090 [INFO][6428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.129 [INFO][6435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.130 [INFO][6435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.130 [INFO][6435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.139 [WARNING][6435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.140 [INFO][6435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.141 [INFO][6435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.144170 env[1260]: 2025-05-13 08:26:30.143 [INFO][6428] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.144170 env[1260]: time="2025-05-13T08:26:30.144089262Z" level=info msg="TearDown network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" successfully" May 13 08:26:30.144170 env[1260]: time="2025-05-13T08:26:30.144125101Z" level=info msg="StopPodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" returns successfully" May 13 08:26:30.145096 env[1260]: time="2025-05-13T08:26:30.145061729Z" level=info msg="RemovePodSandbox for \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:26:30.145245 env[1260]: time="2025-05-13T08:26:30.145187019Z" level=info msg="Forcibly stopping sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\"" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.189 [WARNING][6454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b8bef73-58a7-4997-947c-91687cbacd52", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"dbdafbdc698b96c629b8b1c926bf438f76423977e29d56295e7d17dc6914639c", Pod:"coredns-7db6d8ff4d-m5nkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1b2a54de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.189 [INFO][6454] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.190 [INFO][6454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" iface="eth0" netns="" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.190 [INFO][6454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.190 [INFO][6454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.223 [INFO][6461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.223 [INFO][6461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.223 [INFO][6461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.233 [WARNING][6461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.233 [INFO][6461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" HandleID="k8s-pod-network.781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0" May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.235 [INFO][6461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.237472 env[1260]: 2025-05-13 08:26:30.236 [INFO][6454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" May 13 08:26:30.238209 env[1260]: time="2025-05-13T08:26:30.238159269Z" level=info msg="TearDown network for sandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" successfully" May 13 08:26:30.242518 env[1260]: time="2025-05-13T08:26:30.242474791Z" level=info msg="RemovePodSandbox \"781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191\" returns successfully" May 13 08:26:30.243254 env[1260]: time="2025-05-13T08:26:30.243219230Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.296 [WARNING][6480] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"00446d97-a96a-4df2-93a8-5f3d59494b3b", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb", Pod:"calico-apiserver-6644f7cd55-wdj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif086f8d41c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.297 [INFO][6480] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.297 [INFO][6480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" iface="eth0" netns="" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.297 [INFO][6480] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.297 [INFO][6480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.354 [INFO][6487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.354 [INFO][6487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.354 [INFO][6487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.365 [WARNING][6487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.365 [INFO][6487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.369 [INFO][6487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.371655 env[1260]: 2025-05-13 08:26:30.370 [INFO][6480] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.372185 env[1260]: time="2025-05-13T08:26:30.371665851Z" level=info msg="TearDown network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" successfully" May 13 08:26:30.372185 env[1260]: time="2025-05-13T08:26:30.371698654Z" level=info msg="StopPodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" returns successfully" May 13 08:26:30.372501 env[1260]: time="2025-05-13T08:26:30.372467009Z" level=info msg="RemovePodSandbox for \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:26:30.372663 env[1260]: time="2025-05-13T08:26:30.372609893Z" level=info msg="Forcibly stopping sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\"" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.440 [WARNING][6505] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0", GenerateName:"calico-apiserver-6644f7cd55-", Namespace:"calico-apiserver", SelfLink:"", UID:"00446d97-a96a-4df2-93a8-5f3d59494b3b", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f7cd55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"ff61348ebc881a62c3740ef50640b3a66230b21c27066439f00fac72a3eb6bbb", Pod:"calico-apiserver-6644f7cd55-wdj2v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif086f8d41c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.440 [INFO][6505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.440 [INFO][6505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" iface="eth0" netns="" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.441 [INFO][6505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.441 [INFO][6505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.495 [INFO][6512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.496 [INFO][6512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.496 [INFO][6512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.506 [WARNING][6512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.506 [INFO][6512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" HandleID="k8s-pod-network.a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-calico--apiserver--6644f7cd55--wdj2v-eth0" May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.510 [INFO][6512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.519025 env[1260]: 2025-05-13 08:26:30.515 [INFO][6505] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d" May 13 08:26:30.521477 env[1260]: time="2025-05-13T08:26:30.521404285Z" level=info msg="TearDown network for sandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" successfully" May 13 08:26:30.527916 env[1260]: time="2025-05-13T08:26:30.527848002Z" level=info msg="RemovePodSandbox \"a09d4553979f9cf6342c0d22be5bd79937208fb85131040ca6c20fb92645812d\" returns successfully" May 13 08:26:30.529154 env[1260]: time="2025-05-13T08:26:30.529115584Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.586 [WARNING][6531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3a34ed8-4b6e-4268-a42b-192aa9ef609b", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3", Pod:"csi-node-driver-zls78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc9d348b8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.586 [INFO][6531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.586 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" iface="eth0" netns="" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.586 [INFO][6531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.586 [INFO][6531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.629 [INFO][6539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.629 [INFO][6539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.629 [INFO][6539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.638 [WARNING][6539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.638 [INFO][6539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.640 [INFO][6539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.646931 env[1260]: 2025-05-13 08:26:30.643 [INFO][6531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.646931 env[1260]: time="2025-05-13T08:26:30.644819691Z" level=info msg="TearDown network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" successfully" May 13 08:26:30.646931 env[1260]: time="2025-05-13T08:26:30.644856382Z" level=info msg="StopPodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" returns successfully" May 13 08:26:30.646931 env[1260]: time="2025-05-13T08:26:30.645355869Z" level=info msg="RemovePodSandbox for \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:26:30.646931 env[1260]: time="2025-05-13T08:26:30.645390316Z" level=info msg="Forcibly stopping sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\"" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.707 [WARNING][6559] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3a34ed8-4b6e-4268-a42b-192aa9ef609b", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 8, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-n-f896a7891b.novalocal", ContainerID:"fd4b1c73479bc44d91c8b315909369e381e81c74fbbab52387e901714b9ff6c3", Pod:"csi-node-driver-zls78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc9d348b8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.707 [INFO][6559] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.707 [INFO][6559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" iface="eth0" netns="" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.707 [INFO][6559] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.707 [INFO][6559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.754 [INFO][6568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.754 [INFO][6568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.754 [INFO][6568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.782 [WARNING][6568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.782 [INFO][6568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" HandleID="k8s-pod-network.f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-csi--node--driver--zls78-eth0" May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.785 [INFO][6568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 08:26:30.795659 env[1260]: 2025-05-13 08:26:30.786 [INFO][6559] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c" May 13 08:26:30.796881 env[1260]: time="2025-05-13T08:26:30.796828842Z" level=info msg="TearDown network for sandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" successfully" May 13 08:26:30.802036 env[1260]: time="2025-05-13T08:26:30.801982494Z" level=info msg="RemovePodSandbox \"f46de8a1e869e3648bd147e8bc907c311b72d7cfa18887c05ec9319fd57a4b1c\" returns successfully" May 13 08:26:37.761458 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.QVC1WC.mount: Deactivated successfully. May 13 08:27:07.758015 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.ndyVXd.mount: Deactivated successfully. May 13 08:27:07.849202 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.Ev2gaZ.mount: Deactivated successfully. 
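The Calico CNI teardown entries above all carry the same quoted `key="value"` fields (`ContainerID`, `HandleID`, `Workload`), which makes them easy to extract when triaging a sequence like this. A minimal sketch of pulling those fields out of one such line — the regex and the `parse_fields` helper are assumptions based on the sample entries shown here, not part of Calico itself:

```python
import re

# Quoted key="value" pairs as they appear in the Calico CNI log lines above.
# The field names (ContainerID, HandleID, Workload) are taken from the samples.
FIELD_RE = re.compile(r'(ContainerID|HandleID|Workload)="([^"]+)"')

def parse_fields(line: str) -> dict:
    """Return a dict of the quoted key="value" fields found in one log line."""
    return {key: value for key, value in FIELD_RE.findall(line)}

# Sample taken verbatim from the teardown sequence in this log.
sample = (
    'ipam/ipam_plugin.go 412: Releasing address using handleID '
    'ContainerID="781ee04e009fe8ae0c4dac7f7a85d5dcda8a717827595b9d9ad7808b70b41191" '
    'Workload="ci--3510--3--7--n--f896a7891b.novalocal-k8s-coredns--7db6d8ff4d--m5nkr-eth0"'
)

fields = parse_fields(sample)
print(fields["ContainerID"][:12])  # short container ID prefix for readability
```

Grouping lines by `ContainerID` this way is enough to reconstruct each sandbox's StopPodSandbox → IPAM-release → teardown sequence from an interleaved log.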
May 13 08:27:14.756544 update_engine[1242]: I0513 08:27:14.755975 1242 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 08:27:14.756544 update_engine[1242]: I0513 08:27:14.756202 1242 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 08:27:14.761673 update_engine[1242]: I0513 08:27:14.761402 1242 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 08:27:14.763982 update_engine[1242]: I0513 08:27:14.763926 1242 omaha_request_params.cc:62] Current group set to lts May 13 08:27:14.767398 update_engine[1242]: I0513 08:27:14.767350 1242 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 08:27:14.767398 update_engine[1242]: I0513 08:27:14.767385 1242 update_attempter.cc:643] Scheduling an action processor start. May 13 08:27:14.767874 update_engine[1242]: I0513 08:27:14.767477 1242 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 08:27:14.767874 update_engine[1242]: I0513 08:27:14.767637 1242 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 08:27:14.767874 update_engine[1242]: I0513 08:27:14.767789 1242 omaha_request_action.cc:270] Posting an Omaha request to disabled May 13 08:27:14.767874 update_engine[1242]: I0513 08:27:14.767805 1242 omaha_request_action.cc:271] Request: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: May 13 08:27:14.767874 update_engine[1242]: I0513 08:27:14.767814 1242 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 08:27:14.776263 locksmithd[1299]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 May 13 08:27:14.778516 update_engine[1242]: I0513 08:27:14.778450 1242 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 08:27:14.779648 update_engine[1242]: E0513 08:27:14.779539 1242 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 08:27:14.779875 update_engine[1242]: I0513 08:27:14.779788 1242 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 13 08:27:24.669227 update_engine[1242]: I0513 08:27:24.668828 1242 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 08:27:24.672227 update_engine[1242]: I0513 08:27:24.669819 1242 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 08:27:24.672227 update_engine[1242]: E0513 08:27:24.670157 1242 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 08:27:24.672227 update_engine[1242]: I0513 08:27:24.670552 1242 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 13 08:27:24.776912 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.eOGlwo.mount: Deactivated successfully. 
May 13 08:27:34.669541 update_engine[1242]: I0513 08:27:34.669195 1242 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 08:27:34.672292 update_engine[1242]: I0513 08:27:34.670305 1242 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 08:27:34.672292 update_engine[1242]: E0513 08:27:34.670725 1242 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 08:27:34.672292 update_engine[1242]: I0513 08:27:34.670979 1242 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 13 08:27:44.668193 update_engine[1242]: I0513 08:27:44.668093 1242 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 08:27:44.669625 update_engine[1242]: I0513 08:27:44.668692 1242 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 08:27:44.669625 update_engine[1242]: E0513 08:27:44.668900 1242 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 08:27:44.669625 update_engine[1242]: I0513 08:27:44.669036 1242 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 08:27:44.669625 update_engine[1242]: I0513 08:27:44.669075 1242 omaha_request_action.cc:621] Omaha request response: May 13 08:27:44.669625 update_engine[1242]: E0513 08:27:44.669560 1242 omaha_request_action.cc:640] Omaha request network transfer failed. May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669729 1242 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669741 1242 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669749 1242 update_attempter.cc:306] Processing Done. May 13 08:27:44.670284 update_engine[1242]: E0513 08:27:44.669890 1242 update_attempter.cc:619] Update failed. 
May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669929 1242 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669942 1242 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 13 08:27:44.670284 update_engine[1242]: I0513 08:27:44.669968 1242 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.671257 1242 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.671447 1242 omaha_request_action.cc:270] Posting an Omaha request to disabled May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.671460 1242 omaha_request_action.cc:271] Request: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.671470 1242 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672075 1242 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 08:27:44.672815 update_engine[1242]: E0513 08:27:44.672310 1242 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672448 1242 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672463 1242 omaha_request_action.cc:621] Omaha request response: May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672474 1242 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type 
OmahaRequestAction May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672483 1242 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672490 1242 update_attempter.cc:306] Processing Done. May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672497 1242 update_attempter.cc:310] Error event sent. May 13 08:27:44.672815 update_engine[1242]: I0513 08:27:44.672544 1242 update_check_scheduler.cc:74] Next update check in 49m25s May 13 08:27:44.676344 locksmithd[1299]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 13 08:27:44.676344 locksmithd[1299]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 13 08:28:07.737655 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.6jZc53.mount: Deactivated successfully. May 13 08:28:24.791412 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.lXNq6l.mount: Deactivated successfully. May 13 08:28:37.795905 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.WK3lSU.mount: Deactivated successfully. May 13 08:28:54.811980 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.471VER.mount: Deactivated successfully. May 13 08:29:07.846692 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.LXeGCy.mount: Deactivated successfully. May 13 08:29:11.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.25:22-172.24.4.1:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:11.290872 systemd[1]: Started sshd@9-172.24.4.25:22-172.24.4.1:40826.service. May 13 08:29:11.292980 kernel: kauditd_printk_skb: 14 callbacks suppressed May 13 08:29:11.293138 kernel: audit: type=1130 audit(1747124951.289:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.25:22-172.24.4.1:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:12.478000 audit[6949]: USER_ACCT pid=6949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.483514 sshd[6949]: Accepted publickey for core from 172.24.4.1 port 40826 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:12.495650 kernel: audit: type=1101 audit(1747124952.478:448): pid=6949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.493000 audit[6949]: CRED_ACQ pid=6949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.499546 sshd[6949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:12.510830 kernel: audit: type=1103 audit(1747124952.493:449): pid=6949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.522627 kernel: audit: type=1006 
audit(1747124952.493:450): pid=6949 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 13 08:29:12.493000 audit[6949]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb4828a90 a2=3 a3=0 items=0 ppid=1 pid=6949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:12.545643 kernel: audit: type=1300 audit(1747124952.493:450): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb4828a90 a2=3 a3=0 items=0 ppid=1 pid=6949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:12.493000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:12.559194 kernel: audit: type=1327 audit(1747124952.493:450): proctitle=737368643A20636F7265205B707269765D May 13 08:29:12.565187 systemd-logind[1240]: New session 10 of user core. May 13 08:29:12.567532 systemd[1]: Started session-10.scope. 
May 13 08:29:12.581000 audit[6949]: USER_START pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.581000 audit[6952]: CRED_ACQ pid=6952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.598923 kernel: audit: type=1105 audit(1747124952.581:451): pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:12.599064 kernel: audit: type=1103 audit(1747124952.581:452): pid=6952 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:13.216508 sshd[6949]: pam_unix(sshd:session): session closed for user core May 13 08:29:13.218000 audit[6949]: USER_END pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:13.222824 systemd[1]: sshd@9-172.24.4.25:22-172.24.4.1:40826.service: Deactivated successfully. May 13 08:29:13.224135 systemd[1]: session-10.scope: Deactivated successfully. May 13 08:29:13.235501 systemd-logind[1240]: Session 10 logged out. Waiting for processes to exit. 
May 13 08:29:13.236785 systemd-logind[1240]: Removed session 10. May 13 08:29:13.243231 kernel: audit: type=1106 audit(1747124953.218:453): pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:13.244015 kernel: audit: type=1104 audit(1747124953.218:454): pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:13.218000 audit[6949]: CRED_DISP pid=6949 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:13.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.25:22-172.24.4.1:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:18.229195 systemd[1]: Started sshd@10-172.24.4.25:22-172.24.4.1:39338.service. May 13 08:29:18.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.25:22-172.24.4.1:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:18.249052 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:29:18.249302 kernel: audit: type=1130 audit(1747124958.230:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.25:22-172.24.4.1:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:19.472000 audit[6965]: USER_ACCT pid=6965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.475310 sshd[6965]: Accepted publickey for core from 172.24.4.1 port 39338 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:19.488723 kernel: audit: type=1101 audit(1747124959.472:457): pid=6965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.491243 sshd[6965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:19.489000 audit[6965]: CRED_ACQ pid=6965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.509370 kernel: audit: type=1103 audit(1747124959.489:458): pid=6965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.509710 kernel: audit: type=1006 audit(1747124959.489:459): pid=6965 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 13 08:29:19.508298 systemd[1]: Started session-11.scope. May 13 08:29:19.512877 systemd-logind[1240]: New session 11 of user core. 
May 13 08:29:19.489000 audit[6965]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29fbd850 a2=3 a3=0 items=0 ppid=1 pid=6965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:19.539858 kernel: audit: type=1300 audit(1747124959.489:459): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29fbd850 a2=3 a3=0 items=0 ppid=1 pid=6965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:19.540468 kernel: audit: type=1327 audit(1747124959.489:459): proctitle=737368643A20636F7265205B707269765D May 13 08:29:19.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:19.548000 audit[6965]: USER_START pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.557760 kernel: audit: type=1105 audit(1747124959.548:460): pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.557000 audit[6968]: CRED_ACQ pid=6968 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:19.564724 kernel: audit: type=1103 audit(1747124959.557:461): pid=6968 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:20.329985 sshd[6965]: pam_unix(sshd:session): session closed for user core May 13 08:29:20.332000 audit[6965]: USER_END pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:20.349658 kernel: audit: type=1106 audit(1747124960.332:462): pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:20.350572 systemd[1]: sshd@10-172.24.4.25:22-172.24.4.1:39338.service: Deactivated successfully. May 13 08:29:20.369127 kernel: audit: type=1104 audit(1747124960.332:463): pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:20.332000 audit[6965]: CRED_DISP pid=6965 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:20.366370 systemd[1]: session-11.scope: Deactivated successfully. May 13 08:29:20.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.25:22-172.24.4.1:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:20.368264 systemd-logind[1240]: Session 11 logged out. 
Waiting for processes to exit. May 13 08:29:20.371202 systemd-logind[1240]: Removed session 11. May 13 08:29:24.736998 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.wH2HZc.mount: Deactivated successfully. May 13 08:29:25.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.25:22-172.24.4.1:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:25.337157 systemd[1]: Started sshd@11-172.24.4.25:22-172.24.4.1:35920.service. May 13 08:29:25.354277 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:29:25.354478 kernel: audit: type=1130 audit(1747124965.337:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.25:22-172.24.4.1:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:26.605000 audit[7000]: USER_ACCT pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.606801 sshd[7000]: Accepted publickey for core from 172.24.4.1 port 35920 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:26.614646 kernel: audit: type=1101 audit(1747124966.605:466): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.615139 sshd[7000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:26.613000 audit[7000]: CRED_ACQ pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.623645 kernel: audit: type=1103 audit(1747124966.613:467): pid=7000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.633635 kernel: audit: type=1006 audit(1747124966.613:468): pid=7000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 May 13 08:29:26.613000 audit[7000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcfcab0e0 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:26.642869 kernel: audit: type=1300 audit(1747124966.613:468): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcfcab0e0 a2=3 a3=0 items=0 ppid=1 pid=7000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:26.643099 kernel: audit: type=1327 audit(1747124966.613:468): proctitle=737368643A20636F7265205B707269765D May 13 08:29:26.613000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:26.651304 systemd-logind[1240]: New session 12 of user core. May 13 08:29:26.653907 systemd[1]: Started session-12.scope. 
May 13 08:29:26.661000 audit[7000]: USER_START pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.681677 kernel: audit: type=1105 audit(1747124966.661:469): pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.670000 audit[7003]: CRED_ACQ pid=7003 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:26.697653 kernel: audit: type=1103 audit(1747124966.670:470): pid=7003 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:27.411196 sshd[7000]: pam_unix(sshd:session): session closed for user core May 13 08:29:27.413000 audit[7000]: USER_END pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:27.417525 systemd[1]: sshd@11-172.24.4.25:22-172.24.4.1:35920.service: Deactivated successfully. May 13 08:29:27.419524 systemd[1]: session-12.scope: Deactivated successfully. 
May 13 08:29:27.430859 kernel: audit: type=1106 audit(1747124967.413:471): pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:27.413000 audit[7000]: CRED_DISP pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:27.449930 systemd-logind[1240]: Session 12 logged out. Waiting for processes to exit. May 13 08:29:27.451152 kernel: audit: type=1104 audit(1747124967.413:472): pid=7000 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:27.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.25:22-172.24.4.1:35920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:27.456672 systemd-logind[1240]: Removed session 12. May 13 08:29:32.434141 systemd[1]: Started sshd@12-172.24.4.25:22-172.24.4.1:35932.service. May 13 08:29:32.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.25:22-172.24.4.1:35932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:32.441121 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:29:32.441368 kernel: audit: type=1130 audit(1747124972.435:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.25:22-172.24.4.1:35932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:33.474000 audit[7015]: USER_ACCT pid=7015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.476454 sshd[7015]: Accepted publickey for core from 172.24.4.1 port 35932 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:33.492027 kernel: audit: type=1101 audit(1747124973.474:475): pid=7015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.493000 audit[7015]: CRED_ACQ pid=7015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.495476 sshd[7015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:33.508829 kernel: audit: type=1103 audit(1747124973.493:476): pid=7015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.493000 audit[7015]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5a2b3bb0 a2=3 a3=0 items=0 ppid=1 pid=7015 
auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:33.533644 kernel: audit: type=1006 audit(1747124973.493:477): pid=7015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 May 13 08:29:33.534078 kernel: audit: type=1300 audit(1747124973.493:477): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5a2b3bb0 a2=3 a3=0 items=0 ppid=1 pid=7015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:33.493000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:33.542567 kernel: audit: type=1327 audit(1747124973.493:477): proctitle=737368643A20636F7265205B707269765D May 13 08:29:33.550814 systemd-logind[1240]: New session 13 of user core. May 13 08:29:33.554393 systemd[1]: Started session-13.scope. 
May 13 08:29:33.569000 audit[7015]: USER_START pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.588897 kernel: audit: type=1105 audit(1747124973.569:478): pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.588000 audit[7018]: CRED_ACQ pid=7018 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:33.595647 kernel: audit: type=1103 audit(1747124973.588:479): pid=7018 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:34.153415 sshd[7015]: pam_unix(sshd:session): session closed for user core May 13 08:29:34.158000 audit[7015]: USER_END pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:34.176640 kernel: audit: type=1106 audit(1747124974.158:480): pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:34.159000 audit[7015]: CRED_DISP pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:34.178342 systemd[1]: Started sshd@13-172.24.4.25:22-172.24.4.1:35336.service. May 13 08:29:34.182423 systemd[1]: sshd@12-172.24.4.25:22-172.24.4.1:35932.service: Deactivated successfully. May 13 08:29:34.191892 kernel: audit: type=1104 audit(1747124974.159:481): pid=7015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:34.185458 systemd[1]: session-13.scope: Deactivated successfully. May 13 08:29:34.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.25:22-172.24.4.1:35336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:34.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.25:22-172.24.4.1:35932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:34.195139 systemd-logind[1240]: Session 13 logged out. Waiting for processes to exit. May 13 08:29:34.198699 systemd-logind[1240]: Removed session 13. 
May 13 08:29:35.327000 audit[7028]: USER_ACCT pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:35.328888 sshd[7028]: Accepted publickey for core from 172.24.4.1 port 35336 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:35.330000 audit[7028]: CRED_ACQ pid=7028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:35.331000 audit[7028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd41c0adb0 a2=3 a3=0 items=0 ppid=1 pid=7028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:35.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:35.333938 sshd[7028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:35.348758 systemd[1]: Started session-14.scope. May 13 08:29:35.349248 systemd-logind[1240]: New session 14 of user core. 
May 13 08:29:35.370000 audit[7028]: USER_START pid=7028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:35.374000 audit[7033]: CRED_ACQ pid=7033 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:36.197103 sshd[7028]: pam_unix(sshd:session): session closed for user core May 13 08:29:36.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.25:22-172.24.4.1:35338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:36.209000 audit[7028]: USER_END pid=7028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:36.209000 audit[7028]: CRED_DISP pid=7028 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:36.208229 systemd[1]: Started sshd@14-172.24.4.25:22-172.24.4.1:35338.service. May 13 08:29:36.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.25:22-172.24.4.1:35336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:36.214263 systemd[1]: sshd@13-172.24.4.25:22-172.24.4.1:35336.service: Deactivated successfully. May 13 08:29:36.217015 systemd[1]: session-14.scope: Deactivated successfully. May 13 08:29:36.227447 systemd-logind[1240]: Session 14 logged out. Waiting for processes to exit. May 13 08:29:36.231779 systemd-logind[1240]: Removed session 14. May 13 08:29:37.624375 sshd[7039]: Accepted publickey for core from 172.24.4.1 port 35338 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:37.629966 kernel: kauditd_printk_skb: 13 callbacks suppressed May 13 08:29:37.630144 kernel: audit: type=1101 audit(1747124977.622:493): pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.622000 audit[7039]: USER_ACCT pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.629514 sshd[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:37.628000 audit[7039]: CRED_ACQ pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.654703 kernel: audit: type=1103 audit(1747124977.628:494): pid=7039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.654974 kernel: audit: type=1006 audit(1747124977.628:495): pid=7039 
uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 13 08:29:37.628000 audit[7039]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1c0cb4e0 a2=3 a3=0 items=0 ppid=1 pid=7039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:37.680720 kernel: audit: type=1300 audit(1747124977.628:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1c0cb4e0 a2=3 a3=0 items=0 ppid=1 pid=7039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:37.628000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:37.685670 kernel: audit: type=1327 audit(1747124977.628:495): proctitle=737368643A20636F7265205B707269765D May 13 08:29:37.689990 systemd-logind[1240]: New session 15 of user core. May 13 08:29:37.690266 systemd[1]: Started session-15.scope. 
May 13 08:29:37.705000 audit[7039]: USER_START pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.719859 kernel: audit: type=1105 audit(1747124977.705:496): pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.717000 audit[7044]: CRED_ACQ pid=7044 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.728642 kernel: audit: type=1103 audit(1747124977.717:497): pid=7044 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:37.772954 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.orgq1u.mount: Deactivated successfully. 
May 13 08:29:38.307634 sshd[7039]: pam_unix(sshd:session): session closed for user core May 13 08:29:38.308000 audit[7039]: USER_END pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:38.326652 kernel: audit: type=1106 audit(1747124978.308:498): pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:38.327118 systemd[1]: sshd@14-172.24.4.25:22-172.24.4.1:35338.service: Deactivated successfully. May 13 08:29:38.329506 systemd[1]: session-15.scope: Deactivated successfully. May 13 08:29:38.308000 audit[7039]: CRED_DISP pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:38.343676 kernel: audit: type=1104 audit(1747124978.308:499): pid=7039 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:38.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.25:22-172.24.4.1:35338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:38.344333 systemd-logind[1240]: Session 15 logged out. Waiting for processes to exit. 
May 13 08:29:38.358682 kernel: audit: type=1131 audit(1747124978.326:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.25:22-172.24.4.1:35338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:38.359444 systemd-logind[1240]: Removed session 15. May 13 08:29:43.279147 systemd[1]: Started sshd@15-172.24.4.25:22-172.24.4.1:35340.service. May 13 08:29:43.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.25:22-172.24.4.1:35340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:43.295723 kernel: audit: type=1130 audit(1747124983.279:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.25:22-172.24.4.1:35340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:44.619000 audit[7071]: USER_ACCT pid=7071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.624607 sshd[7071]: Accepted publickey for core from 172.24.4.1 port 35340 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:44.636639 kernel: audit: type=1101 audit(1747124984.619:502): pid=7071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.635000 audit[7071]: CRED_ACQ pid=7071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.638418 sshd[7071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:44.651675 kernel: audit: type=1103 audit(1747124984.635:503): pid=7071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.662892 kernel: audit: type=1006 audit(1747124984.635:504): pid=7071 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 13 08:29:44.635000 audit[7071]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf5dc6820 a2=3 a3=0 items=0 ppid=1 pid=7071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:44.679897 kernel: audit: type=1300 audit(1747124984.635:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf5dc6820 a2=3 a3=0 items=0 ppid=1 pid=7071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:44.681070 systemd-logind[1240]: New session 16 of user core. May 13 08:29:44.682345 systemd[1]: Started session-16.scope. 
May 13 08:29:44.635000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:44.693299 kernel: audit: type=1327 audit(1747124984.635:504): proctitle=737368643A20636F7265205B707269765D May 13 08:29:44.708000 audit[7071]: USER_START pid=7071 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.727690 kernel: audit: type=1105 audit(1747124984.708:505): pid=7071 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.727933 kernel: audit: type=1103 audit(1747124984.713:506): pid=7076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:44.713000 audit[7076]: CRED_ACQ pid=7076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:45.445118 sshd[7071]: pam_unix(sshd:session): session closed for user core May 13 08:29:45.466766 kernel: audit: type=1106 audit(1747124985.445:507): pid=7071 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:45.445000 audit[7071]: USER_END pid=7071 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:45.465802 systemd[1]: sshd@15-172.24.4.25:22-172.24.4.1:35340.service: Deactivated successfully. May 13 08:29:45.468162 systemd[1]: session-16.scope: Deactivated successfully. May 13 08:29:45.446000 audit[7071]: CRED_DISP pid=7071 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:45.483147 kernel: audit: type=1104 audit(1747124985.446:508): pid=7071 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:45.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.25:22-172.24.4.1:35340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:45.483474 systemd-logind[1240]: Session 16 logged out. Waiting for processes to exit. May 13 08:29:45.492862 systemd-logind[1240]: Removed session 16. May 13 08:29:50.467427 systemd[1]: Started sshd@16-172.24.4.25:22-172.24.4.1:41504.service. May 13 08:29:50.482803 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:29:50.483240 kernel: audit: type=1130 audit(1747124990.470:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.25:22-172.24.4.1:41504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:50.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.25:22-172.24.4.1:41504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:51.729000 audit[7091]: USER_ACCT pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.733398 sshd[7091]: Accepted publickey for core from 172.24.4.1 port 41504 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:51.747740 kernel: audit: type=1101 audit(1747124991.729:511): pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.748114 kernel: audit: type=1103 audit(1747124991.745:512): pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.745000 audit[7091]: CRED_ACQ pid=7091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.747187 sshd[7091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:51.779711 kernel: audit: type=1006 audit(1747124991.745:513): pid=7091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 13 08:29:51.745000 audit[7091]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd64ec190 a2=3 a3=0 items=0 ppid=1 pid=7091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:51.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:51.804410 kernel: audit: type=1300 audit(1747124991.745:513): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd64ec190 a2=3 a3=0 items=0 ppid=1 pid=7091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:51.804720 kernel: audit: type=1327 audit(1747124991.745:513): proctitle=737368643A20636F7265205B707269765D May 13 08:29:51.821060 systemd-logind[1240]: New session 17 of user core. May 13 08:29:51.823373 systemd[1]: Started session-17.scope. May 13 08:29:51.850000 audit[7091]: USER_START pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.858595 kernel: audit: type=1105 audit(1747124991.850:514): pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.860000 audit[7094]: CRED_ACQ pid=7094 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:51.867088 kernel: audit: type=1103 audit(1747124991.860:515): pid=7094 uid=0 auid=500 
ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:52.459052 sshd[7091]: pam_unix(sshd:session): session closed for user core May 13 08:29:52.466000 audit[7091]: USER_END pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:52.486675 kernel: audit: type=1106 audit(1747124992.466:516): pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:52.487210 systemd[1]: sshd@16-172.24.4.25:22-172.24.4.1:41504.service: Deactivated successfully. May 13 08:29:52.493376 systemd[1]: session-17.scope: Deactivated successfully. May 13 08:29:52.495225 systemd-logind[1240]: Session 17 logged out. Waiting for processes to exit. May 13 08:29:52.498523 systemd-logind[1240]: Removed session 17. May 13 08:29:52.466000 audit[7091]: CRED_DISP pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:52.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.25:22-172.24.4.1:41504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:52.507616 kernel: audit: type=1104 audit(1747124992.466:517): pid=7091 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:57.462420 systemd[1]: Started sshd@17-172.24.4.25:22-172.24.4.1:51324.service. May 13 08:29:57.483446 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:29:57.483828 kernel: audit: type=1130 audit(1747124997.462:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.25:22-172.24.4.1:51324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:29:57.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.25:22-172.24.4.1:51324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:29:58.749000 audit[7125]: USER_ACCT pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.765714 kernel: audit: type=1101 audit(1747124998.749:520): pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.765858 sshd[7125]: Accepted publickey for core from 172.24.4.1 port 51324 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:58.766656 sshd[7125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:58.764000 audit[7125]: CRED_ACQ pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.786538 systemd[1]: Started session-18.scope. May 13 08:29:58.788634 systemd-logind[1240]: New session 18 of user core. 
May 13 08:29:58.789717 kernel: audit: type=1103 audit(1747124998.764:521): pid=7125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.789799 kernel: audit: type=1006 audit(1747124998.764:522): pid=7125 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 13 08:29:58.764000 audit[7125]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff73274ff0 a2=3 a3=0 items=0 ppid=1 pid=7125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:58.803874 kernel: audit: type=1300 audit(1747124998.764:522): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff73274ff0 a2=3 a3=0 items=0 ppid=1 pid=7125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:29:58.764000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:29:58.807681 kernel: audit: type=1327 audit(1747124998.764:522): proctitle=737368643A20636F7265205B707269765D May 13 08:29:58.795000 audit[7125]: USER_START pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.807000 audit[7128]: CRED_ACQ pid=7128 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.821038 kernel: audit: type=1105 
audit(1747124998.795:523): pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:58.821152 kernel: audit: type=1103 audit(1747124998.807:524): pid=7128 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:59.649675 kernel: audit: type=1106 audit(1747124999.631:525): pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:59.631000 audit[7125]: USER_END pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:59.631202 sshd[7125]: pam_unix(sshd:session): session closed for user core May 13 08:29:59.631000 audit[7125]: CRED_DISP pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:59.651123 systemd-logind[1240]: Session 18 logged out. Waiting for processes to exit. May 13 08:29:59.652712 systemd[1]: sshd@17-172.24.4.25:22-172.24.4.1:51324.service: Deactivated successfully. May 13 08:29:59.654131 systemd[1]: session-18.scope: Deactivated successfully. 
May 13 08:29:59.656152 systemd-logind[1240]: Removed session 18. May 13 08:29:59.664808 kernel: audit: type=1104 audit(1747124999.631:526): pid=7125 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:29:59.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.25:22-172.24.4.1:51324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:04.691852 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:30:04.693205 kernel: audit: type=1130 audit(1747125004.673:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.25:22-172.24.4.1:40482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:04.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.25:22-172.24.4.1:40482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:04.673342 systemd[1]: Started sshd@18-172.24.4.25:22-172.24.4.1:40482.service. 
May 13 08:30:06.127000 audit[7139]: USER_ACCT pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.129060 sshd[7139]: Accepted publickey for core from 172.24.4.1 port 40482 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:06.143742 kernel: audit: type=1101 audit(1747125006.127:529): pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.145336 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:06.143000 audit[7139]: CRED_ACQ pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.165784 kernel: audit: type=1103 audit(1747125006.143:530): pid=7139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.178267 kernel: audit: type=1006 audit(1747125006.143:531): pid=7139 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 May 13 08:30:06.178616 kernel: audit: type=1300 audit(1747125006.143:531): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98e16130 a2=3 a3=0 items=0 ppid=1 pid=7139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) May 13 08:30:06.143000 audit[7139]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98e16130 a2=3 a3=0 items=0 ppid=1 pid=7139 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:06.143000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:06.203861 systemd-logind[1240]: New session 19 of user core. May 13 08:30:06.206848 kernel: audit: type=1327 audit(1747125006.143:531): proctitle=737368643A20636F7265205B707269765D May 13 08:30:06.205300 systemd[1]: Started session-19.scope. May 13 08:30:06.226000 audit[7139]: USER_START pid=7139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.234674 kernel: audit: type=1105 audit(1747125006.226:532): pid=7139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.234000 audit[7142]: CRED_ACQ pid=7142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:06.241913 kernel: audit: type=1103 audit(1747125006.234:533): pid=7142 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:07.032095 sshd[7139]: pam_unix(sshd:session): session closed for user 
core May 13 08:30:07.037114 systemd[1]: Started sshd@19-172.24.4.25:22-172.24.4.1:40496.service. May 13 08:30:07.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.25:22-172.24.4.1:40496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:07.053473 systemd[1]: sshd@18-172.24.4.25:22-172.24.4.1:40482.service: Deactivated successfully. May 13 08:30:07.057160 systemd-logind[1240]: Session 19 logged out. Waiting for processes to exit. May 13 08:30:07.061869 systemd[1]: session-19.scope: Deactivated successfully. May 13 08:30:07.066237 systemd-logind[1240]: Removed session 19. May 13 08:30:07.068346 kernel: audit: type=1130 audit(1747125007.037:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.25:22-172.24.4.1:40496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:07.045000 audit[7139]: USER_END pid=7139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:07.096756 kernel: audit: type=1106 audit(1747125007.045:535): pid=7139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:07.046000 audit[7139]: CRED_DISP pid=7139 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:07.053000 
audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.25:22-172.24.4.1:40482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:07.792564 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.VW8b1h.mount: Deactivated successfully. May 13 08:30:08.483622 sshd[7149]: Accepted publickey for core from 172.24.4.1 port 40496 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:08.481000 audit[7149]: USER_ACCT pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:08.485000 audit[7149]: CRED_ACQ pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:08.485000 audit[7149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf47de9c0 a2=3 a3=0 items=0 ppid=1 pid=7149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:08.485000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:08.488059 sshd[7149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:08.501357 systemd[1]: Started session-20.scope. May 13 08:30:08.501907 systemd-logind[1240]: New session 20 of user core. 
May 13 08:30:08.515000 audit[7149]: USER_START pid=7149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:08.520000 audit[7193]: CRED_ACQ pid=7193 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:10.428949 sshd[7149]: pam_unix(sshd:session): session closed for user core May 13 08:30:10.445431 kernel: kauditd_printk_skb: 9 callbacks suppressed May 13 08:30:10.445843 kernel: audit: type=1106 audit(1747125010.431:543): pid=7149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:10.431000 audit[7149]: USER_END pid=7149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:10.433319 systemd[1]: Started sshd@20-172.24.4.25:22-172.24.4.1:40504.service. May 13 08:30:10.431000 audit[7149]: CRED_DISP pid=7149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:10.435353 systemd[1]: sshd@19-172.24.4.25:22-172.24.4.1:40496.service: Deactivated successfully. 
May 13 08:30:10.443823 systemd[1]: session-20.scope: Deactivated successfully. May 13 08:30:10.456684 kernel: audit: type=1104 audit(1747125010.431:544): pid=7149 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:10.456995 systemd-logind[1240]: Session 20 logged out. Waiting for processes to exit. May 13 08:30:10.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.25:22-172.24.4.1:40504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:10.473433 systemd-logind[1240]: Removed session 20. May 13 08:30:10.473851 kernel: audit: type=1130 audit(1747125010.432:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.25:22-172.24.4.1:40504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:10.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.25:22-172.24.4.1:40496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:10.481739 kernel: audit: type=1131 audit(1747125010.435:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.25:22-172.24.4.1:40496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:30:11.961000 audit[7199]: USER_ACCT pid=7199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:11.963431 sshd[7199]: Accepted publickey for core from 172.24.4.1 port 40504 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:11.970623 kernel: audit: type=1101 audit(1747125011.961:547): pid=7199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:11.972176 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:11.970000 audit[7199]: CRED_ACQ pid=7199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:11.979615 kernel: audit: type=1103 audit(1747125011.970:548): pid=7199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:11.985028 systemd[1]: Started session-21.scope. May 13 08:30:11.985427 systemd-logind[1240]: New session 21 of user core. 
May 13 08:30:11.991630 kernel: audit: type=1006 audit(1747125011.970:549): pid=7199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 May 13 08:30:11.970000 audit[7199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef2b8ae0 a2=3 a3=0 items=0 ppid=1 pid=7199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:12.005675 kernel: audit: type=1300 audit(1747125011.970:549): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef2b8ae0 a2=3 a3=0 items=0 ppid=1 pid=7199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:11.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:11.994000 audit[7199]: USER_START pid=7199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:12.023890 kernel: audit: type=1327 audit(1747125011.970:549): proctitle=737368643A20636F7265205B707269765D May 13 08:30:12.024007 kernel: audit: type=1105 audit(1747125011.994:550): pid=7199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:11.996000 audit[7205]: CRED_ACQ pid=7205 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh 
res=success' May 13 08:30:15.581000 audit[7216]: NETFILTER_CFG table=filter:146 family=2 entries=20 op=nft_register_rule pid=7216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.586220 kernel: kauditd_printk_skb: 1 callbacks suppressed May 13 08:30:15.586441 kernel: audit: type=1325 audit(1747125015.581:552): table=filter:146 family=2 entries=20 op=nft_register_rule pid=7216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.592940 kernel: audit: type=1300 audit(1747125015.581:552): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffdbacbba0 a2=0 a3=7fffdbacbb8c items=0 ppid=2353 pid=7216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.581000 audit[7216]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffdbacbba0 a2=0 a3=7fffdbacbb8c items=0 ppid=2353 pid=7216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.603053 kernel: audit: type=1327 audit(1747125015.581:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.602000 audit[7216]: NETFILTER_CFG table=nat:147 family=2 entries=22 op=nft_register_rule pid=7216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.608606 kernel: audit: type=1325 audit(1747125015.602:553): table=nat:147 family=2 entries=22 op=nft_register_rule pid=7216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.608714 kernel: audit: type=1300 
audit(1747125015.602:553): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffdbacbba0 a2=0 a3=0 items=0 ppid=2353 pid=7216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.602000 audit[7216]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffdbacbba0 a2=0 a3=0 items=0 ppid=2353 pid=7216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.621661 kernel: audit: type=1327 audit(1747125015.602:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.638000 audit[7218]: NETFILTER_CFG table=filter:148 family=2 entries=32 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.638000 audit[7218]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff96f18ca0 a2=0 a3=7fff96f18c8c items=0 ppid=2353 pid=7218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.652853 kernel: audit: type=1325 audit(1747125015.638:554): table=filter:148 family=2 entries=32 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.653061 kernel: audit: type=1300 audit(1747125015.638:554): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff96f18ca0 a2=0 a3=7fff96f18c8c items=0 ppid=2353 pid=7218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.657650 kernel: audit: type=1327 audit(1747125015.638:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.660000 audit[7218]: NETFILTER_CFG table=nat:149 family=2 entries=22 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.660000 audit[7218]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff96f18ca0 a2=0 a3=0 items=0 ppid=2353 pid=7218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:15.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:15.668655 kernel: audit: type=1325 audit(1747125015.660:555): table=nat:149 family=2 entries=22 op=nft_register_rule pid=7218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:15.746919 sshd[7199]: pam_unix(sshd:session): session closed for user core May 13 08:30:15.755000 audit[7199]: USER_END pid=7199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:15.756000 audit[7199]: CRED_DISP pid=7199 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:15.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.25:22-172.24.4.1:47702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:15.758279 systemd[1]: Started sshd@21-172.24.4.25:22-172.24.4.1:47702.service. May 13 08:30:15.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.25:22-172.24.4.1:40504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:15.761465 systemd[1]: sshd@20-172.24.4.25:22-172.24.4.1:40504.service: Deactivated successfully. May 13 08:30:15.770153 systemd[1]: session-21.scope: Deactivated successfully. May 13 08:30:15.772009 systemd-logind[1240]: Session 21 logged out. Waiting for processes to exit. May 13 08:30:15.773999 systemd-logind[1240]: Removed session 21. 
May 13 08:30:17.122000 audit[7219]: USER_ACCT pid=7219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:17.126301 sshd[7219]: Accepted publickey for core from 172.24.4.1 port 47702 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:17.126000 audit[7219]: CRED_ACQ pid=7219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:17.126000 audit[7219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce7a32c00 a2=3 a3=0 items=0 ppid=1 pid=7219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:17.126000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:17.130235 sshd[7219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:17.148386 systemd[1]: Started session-22.scope. May 13 08:30:17.149058 systemd-logind[1240]: New session 22 of user core. 
May 13 08:30:17.165000 audit[7219]: USER_START pid=7219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:17.172000 audit[7224]: CRED_ACQ pid=7224 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:18.244228 sshd[7219]: pam_unix(sshd:session): session closed for user core May 13 08:30:18.244000 audit[7219]: USER_END pid=7219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:18.244000 audit[7219]: CRED_DISP pid=7219 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:18.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.25:22-172.24.4.1:47718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:18.246730 systemd[1]: Started sshd@22-172.24.4.25:22-172.24.4.1:47718.service. May 13 08:30:18.248224 systemd[1]: sshd@21-172.24.4.25:22-172.24.4.1:47702.service: Deactivated successfully. May 13 08:30:18.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.25:22-172.24.4.1:47702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 08:30:18.252734 systemd-logind[1240]: Session 22 logged out. Waiting for processes to exit. May 13 08:30:18.257954 systemd[1]: session-22.scope: Deactivated successfully. May 13 08:30:18.261961 systemd-logind[1240]: Removed session 22. May 13 08:30:19.751000 audit[7230]: USER_ACCT pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:19.752652 sshd[7230]: Accepted publickey for core from 172.24.4.1 port 47718 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:19.755000 audit[7230]: CRED_ACQ pid=7230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:19.755000 audit[7230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee42c0e40 a2=3 a3=0 items=0 ppid=1 pid=7230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:19.755000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:19.756807 sshd[7230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:19.771424 systemd-logind[1240]: New session 23 of user core. May 13 08:30:19.771787 systemd[1]: Started session-23.scope. 
May 13 08:30:19.789000 audit[7230]: USER_START pid=7230 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:19.794000 audit[7235]: CRED_ACQ pid=7235 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:20.569366 sshd[7230]: pam_unix(sshd:session): session closed for user core May 13 08:30:20.570000 audit[7230]: USER_END pid=7230 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:20.570000 audit[7230]: CRED_DISP pid=7230 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:20.574150 systemd[1]: sshd@22-172.24.4.25:22-172.24.4.1:47718.service: Deactivated successfully. May 13 08:30:20.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.25:22-172.24.4.1:47718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:20.576561 systemd[1]: session-23.scope: Deactivated successfully. May 13 08:30:20.576847 systemd-logind[1240]: Session 23 logged out. Waiting for processes to exit. May 13 08:30:20.580188 systemd-logind[1240]: Removed session 23. 
May 13 08:30:25.465646 kernel: kauditd_printk_skb: 27 callbacks suppressed May 13 08:30:25.465866 kernel: audit: type=1325 audit(1747125025.461:577): table=filter:150 family=2 entries=20 op=nft_register_rule pid=7272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:25.461000 audit[7272]: NETFILTER_CFG table=filter:150 family=2 entries=20 op=nft_register_rule pid=7272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:25.461000 audit[7272]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffb6e6b830 a2=0 a3=7fffb6e6b81c items=0 ppid=2353 pid=7272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:25.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:25.482402 kernel: audit: type=1300 audit(1747125025.461:577): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffb6e6b830 a2=0 a3=7fffb6e6b81c items=0 ppid=2353 pid=7272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:25.482493 kernel: audit: type=1327 audit(1747125025.461:577): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:25.484000 audit[7272]: NETFILTER_CFG table=nat:151 family=2 entries=106 op=nft_register_chain pid=7272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:25.484000 audit[7272]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffb6e6b830 a2=0 a3=7fffb6e6b81c items=0 ppid=2353 pid=7272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:25.498548 kernel: audit: type=1325 audit(1747125025.484:578): table=nat:151 family=2 entries=106 op=nft_register_chain pid=7272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 13 08:30:25.498723 kernel: audit: type=1300 audit(1747125025.484:578): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffb6e6b830 a2=0 a3=7fffb6e6b81c items=0 ppid=2353 pid=7272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:25.498792 kernel: audit: type=1327 audit(1747125025.484:578): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:25.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 13 08:30:25.582990 systemd[1]: Started sshd@23-172.24.4.25:22-172.24.4.1:58984.service. May 13 08:30:25.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.25:22-172.24.4.1:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:25.601730 kernel: audit: type=1130 audit(1747125025.584:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.25:22-172.24.4.1:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:30:26.737000 audit[7274]: USER_ACCT pid=7274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:26.738965 sshd[7274]: Accepted publickey for core from 172.24.4.1 port 58984 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:26.749712 kernel: audit: type=1101 audit(1747125026.737:580): pid=7274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:26.752119 sshd[7274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:26.750000 audit[7274]: CRED_ACQ pid=7274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:26.771785 kernel: audit: type=1103 audit(1747125026.750:581): pid=7274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:26.771926 kernel: audit: type=1006 audit(1747125026.750:582): pid=7274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 13 08:30:26.750000 audit[7274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0d88fd0 a2=3 a3=0 items=0 ppid=1 pid=7274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 
08:30:26.750000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:26.775220 systemd[1]: Started session-24.scope. May 13 08:30:26.775487 systemd-logind[1240]: New session 24 of user core. May 13 08:30:26.784000 audit[7274]: USER_START pid=7274 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:26.786000 audit[7277]: CRED_ACQ pid=7277 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:27.437860 sshd[7274]: pam_unix(sshd:session): session closed for user core May 13 08:30:27.439000 audit[7274]: USER_END pid=7274 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:27.440000 audit[7274]: CRED_DISP pid=7274 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:27.443590 systemd[1]: sshd@23-172.24.4.25:22-172.24.4.1:58984.service: Deactivated successfully. May 13 08:30:27.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.25:22-172.24.4.1:58984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:27.445257 systemd[1]: session-24.scope: Deactivated successfully. 
May 13 08:30:27.445943 systemd-logind[1240]: Session 24 logged out. Waiting for processes to exit. May 13 08:30:27.447010 systemd-logind[1240]: Removed session 24. May 13 08:30:32.463280 systemd[1]: Started sshd@24-172.24.4.25:22-172.24.4.1:59000.service. May 13 08:30:32.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.25:22-172.24.4.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:32.468165 kernel: kauditd_printk_skb: 7 callbacks suppressed May 13 08:30:32.468441 kernel: audit: type=1130 audit(1747125032.464:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.25:22-172.24.4.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:30:33.547000 audit[7289]: USER_ACCT pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:33.548741 sshd[7289]: Accepted publickey for core from 172.24.4.1 port 59000 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:33.561696 kernel: audit: type=1101 audit(1747125033.547:589): pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:33.564908 sshd[7289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:33.563000 audit[7289]: CRED_ACQ pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:33.577705 kernel: audit: type=1103 audit(1747125033.563:590): pid=7289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' May 13 08:30:33.591626 kernel: audit: type=1006 audit(1747125033.563:591): pid=7289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 13 08:30:33.563000 audit[7289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd010aee60 a2=3 a3=0 items=0 ppid=1 pid=7289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:33.605214 kernel: audit: type=1300 audit(1747125033.563:591): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd010aee60 a2=3 a3=0 items=0 ppid=1 pid=7289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:30:33.605387 kernel: audit: type=1327 audit(1747125033.563:591): proctitle=737368643A20636F7265205B707269765D May 13 08:30:33.563000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 13 08:30:33.612906 systemd-logind[1240]: New session 25 of user core. May 13 08:30:33.616886 systemd[1]: Started session-25.scope. 
May 13 08:30:33.626000 audit[7289]: USER_START pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:33.640271 kernel: audit: type=1105 audit(1747125033.626:592): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:33.640349 kernel: audit: type=1103 audit(1747125033.629:593): pid=7292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:33.629000 audit[7292]: CRED_ACQ pid=7292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:34.208501 sshd[7289]: pam_unix(sshd:session): session closed for user core
May 13 08:30:34.210000 audit[7289]: USER_END pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:34.220870 kernel: audit: type=1106 audit(1747125034.210:594): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:34.214230 systemd[1]: sshd@24-172.24.4.25:22-172.24.4.1:59000.service: Deactivated successfully.
May 13 08:30:34.217644 systemd[1]: session-25.scope: Deactivated successfully.
May 13 08:30:34.223611 systemd-logind[1240]: Session 25 logged out. Waiting for processes to exit.
May 13 08:30:34.210000 audit[7289]: CRED_DISP pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:34.238038 systemd-logind[1240]: Removed session 25.
May 13 08:30:34.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.25:22-172.24.4.1:59000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:34.244638 kernel: audit: type=1104 audit(1747125034.210:595): pid=7289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:37.798101 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.zusPsO.mount: Deactivated successfully.
May 13 08:30:39.221010 systemd[1]: Started sshd@25-172.24.4.25:22-172.24.4.1:36708.service.
May 13 08:30:39.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.25:22-172.24.4.1:36708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:39.225532 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 13 08:30:39.225791 kernel: audit: type=1130 audit(1747125039.220:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.25:22-172.24.4.1:36708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:40.531000 audit[7321]: USER_ACCT pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.533460 sshd[7321]: Accepted publickey for core from 172.24.4.1 port 36708 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:30:40.548716 kernel: audit: type=1101 audit(1747125040.531:598): pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.547000 audit[7321]: CRED_ACQ pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.550789 sshd[7321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:30:40.564725 kernel: audit: type=1103 audit(1747125040.547:599): pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.576086 kernel: audit: type=1006 audit(1747125040.548:600): pid=7321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
May 13 08:30:40.548000 audit[7321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc22f4b850 a2=3 a3=0 items=0 ppid=1 pid=7321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:40.599722 kernel: audit: type=1300 audit(1747125040.548:600): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc22f4b850 a2=3 a3=0 items=0 ppid=1 pid=7321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:40.548000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 13 08:30:40.603701 kernel: audit: type=1327 audit(1747125040.548:600): proctitle=737368643A20636F7265205B707269765D
May 13 08:30:40.608433 systemd-logind[1240]: New session 26 of user core.
May 13 08:30:40.614091 systemd[1]: Started session-26.scope.
May 13 08:30:40.632000 audit[7321]: USER_START pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.641780 kernel: audit: type=1105 audit(1747125040.632:601): pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.649486 kernel: audit: type=1103 audit(1747125040.640:602): pid=7324 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:40.640000 audit[7324]: CRED_ACQ pid=7324 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:41.275085 sshd[7321]: pam_unix(sshd:session): session closed for user core
May 13 08:30:41.276000 audit[7321]: USER_END pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:41.285791 kernel: audit: type=1106 audit(1747125041.276:603): pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:41.285939 systemd[1]: sshd@25-172.24.4.25:22-172.24.4.1:36708.service: Deactivated successfully.
May 13 08:30:41.288028 systemd-logind[1240]: Session 26 logged out. Waiting for processes to exit.
May 13 08:30:41.289127 systemd[1]: session-26.scope: Deactivated successfully.
May 13 08:30:41.277000 audit[7321]: CRED_DISP pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:41.296308 systemd-logind[1240]: Removed session 26.
May 13 08:30:41.296741 kernel: audit: type=1104 audit(1747125041.277:604): pid=7321 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:41.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.25:22-172.24.4.1:36708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:46.292646 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 13 08:30:46.292959 kernel: audit: type=1130 audit(1747125046.290:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.24.4.25:22-172.24.4.1:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:46.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.24.4.25:22-172.24.4.1:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:46.291126 systemd[1]: Started sshd@26-172.24.4.25:22-172.24.4.1:48196.service.
May 13 08:30:47.576000 audit[7351]: USER_ACCT pid=7351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.578518 sshd[7351]: Accepted publickey for core from 172.24.4.1 port 48196 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:30:47.583000 audit[7351]: CRED_ACQ pid=7351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.585881 sshd[7351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:30:47.591775 kernel: audit: type=1101 audit(1747125047.576:607): pid=7351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.591878 kernel: audit: type=1103 audit(1747125047.583:608): pid=7351 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.596587 kernel: audit: type=1006 audit(1747125047.583:609): pid=7351 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
May 13 08:30:47.583000 audit[7351]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfc6e87c0 a2=3 a3=0 items=0 ppid=1 pid=7351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:47.583000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 13 08:30:47.606770 kernel: audit: type=1300 audit(1747125047.583:609): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfc6e87c0 a2=3 a3=0 items=0 ppid=1 pid=7351 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:47.606943 kernel: audit: type=1327 audit(1747125047.583:609): proctitle=737368643A20636F7265205B707269765D
May 13 08:30:47.611654 systemd-logind[1240]: New session 27 of user core.
May 13 08:30:47.613908 systemd[1]: Started session-27.scope.
May 13 08:30:47.626000 audit[7351]: USER_START pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.634741 kernel: audit: type=1105 audit(1747125047.626:610): pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.634000 audit[7354]: CRED_ACQ pid=7354 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:47.643658 kernel: audit: type=1103 audit(1747125047.634:611): pid=7354 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:48.395451 sshd[7351]: pam_unix(sshd:session): session closed for user core
May 13 08:30:48.400000 audit[7351]: USER_END pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:48.418668 kernel: audit: type=1106 audit(1747125048.400:612): pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:48.419936 systemd[1]: sshd@26-172.24.4.25:22-172.24.4.1:48196.service: Deactivated successfully.
May 13 08:30:48.401000 audit[7351]: CRED_DISP pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:48.446707 kernel: audit: type=1104 audit(1747125048.401:613): pid=7351 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:48.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.24.4.25:22-172.24.4.1:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:48.465422 systemd-logind[1240]: Session 27 logged out. Waiting for processes to exit.
May 13 08:30:48.471981 systemd[1]: session-27.scope: Deactivated successfully.
May 13 08:30:48.476830 systemd-logind[1240]: Removed session 27.
May 13 08:30:53.402890 systemd[1]: Started sshd@27-172.24.4.25:22-172.24.4.1:48202.service.
May 13 08:30:53.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.24.4.25:22-172.24.4.1:48202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:53.405564 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 13 08:30:53.405897 kernel: audit: type=1130 audit(1747125053.403:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.24.4.25:22-172.24.4.1:48202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:30:54.766000 audit[7363]: USER_ACCT pid=7363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.770520 sshd[7363]: Accepted publickey for core from 172.24.4.1 port 48202 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:30:54.774793 kernel: audit: type=1101 audit(1747125054.766:616): pid=7363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.775751 sshd[7363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:30:54.773000 audit[7363]: CRED_ACQ pid=7363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.784799 kernel: audit: type=1103 audit(1747125054.773:617): pid=7363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.795712 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.Zmi1bq.mount: Deactivated successfully.
May 13 08:30:54.773000 audit[7363]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b31bdd0 a2=3 a3=0 items=0 ppid=1 pid=7363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:54.809639 kernel: audit: type=1006 audit(1747125054.773:618): pid=7363 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
May 13 08:30:54.809833 kernel: audit: type=1300 audit(1747125054.773:618): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b31bdd0 a2=3 a3=0 items=0 ppid=1 pid=7363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:30:54.809927 kernel: audit: type=1327 audit(1747125054.773:618): proctitle=737368643A20636F7265205B707269765D
May 13 08:30:54.773000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 13 08:30:54.820707 systemd[1]: Started session-28.scope.
May 13 08:30:54.821138 systemd-logind[1240]: New session 28 of user core.
May 13 08:30:54.833000 audit[7363]: USER_START pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.847064 kernel: audit: type=1105 audit(1747125054.833:619): pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.847184 kernel: audit: type=1103 audit(1747125054.841:620): pid=7377 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:54.841000 audit[7377]: CRED_ACQ pid=7377 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:55.469878 sshd[7363]: pam_unix(sshd:session): session closed for user core
May 13 08:30:55.472000 audit[7363]: USER_END pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:55.489640 kernel: audit: type=1106 audit(1747125055.472:621): pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:55.491362 systemd[1]: sshd@27-172.24.4.25:22-172.24.4.1:48202.service: Deactivated successfully.
May 13 08:30:55.472000 audit[7363]: CRED_DISP pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:55.508698 kernel: audit: type=1104 audit(1747125055.472:622): pid=7363 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success'
May 13 08:30:55.507711 systemd[1]: session-28.scope: Deactivated successfully.
May 13 08:30:55.508106 systemd-logind[1240]: Session 28 logged out. Waiting for processes to exit.
May 13 08:30:55.512331 systemd-logind[1240]: Removed session 28.
May 13 08:30:55.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.24.4.25:22-172.24.4.1:48202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:31:24.809131 systemd[1]: run-containerd-runc-k8s.io-a90f3661e627bd325a93274fe3cef9643246646fe349a0297a277446e2e70a84-runc.nJHX7R.mount: Deactivated successfully.
May 13 08:31:37.789051 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.Sc0O0J.mount: Deactivated successfully.
May 13 08:32:07.813902 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.QbsdOm.mount: Deactivated successfully.
May 13 08:32:07.895190 systemd[1]: run-containerd-runc-k8s.io-7236bf575f5f50bf48b4084738836f5c51b899deb364d05c255c59db86b56a07-runc.Qz3qbU.mount: Deactivated successfully.