May 17 00:41:25.975237 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:41:25.975272 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:25.975303 kernel: BIOS-provided physical RAM map:
May 17 00:41:25.975314 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:41:25.975323 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:41:25.975335 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:41:25.975347 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 17 00:41:25.975358 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 17 00:41:25.975374 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:41:25.975385 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:41:25.975397 kernel: NX (Execute Disable) protection: active
May 17 00:41:25.975408 kernel: SMBIOS 2.8 present.
May 17 00:41:25.975420 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 17 00:41:25.975432 kernel: Hypervisor detected: KVM
May 17 00:41:25.975444 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:41:25.975454 kernel: kvm-clock: cpu 0, msr 6919a001, primary cpu clock
May 17 00:41:25.975461 kernel: kvm-clock: using sched offset of 3913524981 cycles
May 17 00:41:25.975469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:41:25.975479 kernel: tsc: Detected 2494.134 MHz processor
May 17 00:41:25.975487 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:41:25.975495 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:41:25.975502 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 17 00:41:25.975509 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:41:25.975519 kernel: ACPI: Early table checksum verification disabled
May 17 00:41:25.975526 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 17 00:41:25.975534 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975541 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975551 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975561 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:41:25.975571 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975578 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975585 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975596 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:41:25.975603 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 17 00:41:25.975610 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 17 00:41:25.975617 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:41:25.975625 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 17 00:41:25.975632 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 17 00:41:25.975639 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 17 00:41:25.975646 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 17 00:41:25.975661 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:41:25.975668 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:41:25.975676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:41:25.975684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:41:25.975692 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 17 00:41:25.975699 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 17 00:41:25.975710 kernel: Zone ranges:
May 17 00:41:25.975717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:41:25.975725 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 17 00:41:25.975733 kernel: Normal empty
May 17 00:41:25.975741 kernel: Movable zone start for each node
May 17 00:41:25.975748 kernel: Early memory node ranges
May 17 00:41:25.975756 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:41:25.975764 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 17 00:41:25.975772 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 17 00:41:25.975782 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:41:25.975793 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:41:25.975801 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 17 00:41:25.975808 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:41:25.975816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:41:25.975824 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:41:25.975832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:41:25.975839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:41:25.975847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:41:25.975857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:41:25.975868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:41:25.975876 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:41:25.975886 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:41:25.975897 kernel: TSC deadline timer available
May 17 00:41:25.975909 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:41:25.975918 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 17 00:41:25.975926 kernel: Booting paravirtualized kernel on KVM
May 17 00:41:25.975934 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:41:25.975944 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:41:25.975952 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:41:25.975960 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:41:25.975968 kernel: pcpu-alloc: [0] 0 1
May 17 00:41:25.975976 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
May 17 00:41:25.975984 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:41:25.975992 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 17 00:41:25.975999 kernel: Policy zone: DMA32
May 17 00:41:25.976010 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:25.976026 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:41:25.976037 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:41:25.976049 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:41:25.976059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:41:25.976067 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved)
May 17 00:41:25.976075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:41:25.976083 kernel: Kernel/User page tables isolation: enabled
May 17 00:41:25.976090 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:41:25.976101 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:41:25.976109 kernel: rcu: Hierarchical RCU implementation.
May 17 00:41:25.976118 kernel: rcu: RCU event tracing is enabled.
May 17 00:41:25.976126 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:41:25.976134 kernel: Rude variant of Tasks RCU enabled.
May 17 00:41:25.976142 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:41:25.976150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:41:25.976159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:41:25.976171 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:41:25.976187 kernel: random: crng init done
May 17 00:41:25.976195 kernel: Console: colour VGA+ 80x25
May 17 00:41:25.976203 kernel: printk: console [tty0] enabled
May 17 00:41:25.976212 kernel: printk: console [ttyS0] enabled
May 17 00:41:25.976221 kernel: ACPI: Core revision 20210730
May 17 00:41:25.976230 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:41:25.976237 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:41:25.976245 kernel: x2apic enabled
May 17 00:41:25.976253 kernel: Switched APIC routing to physical x2apic.
May 17 00:41:25.976263 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:41:25.976276 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 17 00:41:25.983351 kernel: Calibrating delay loop (skipped) preset value.. 4988.26 BogoMIPS (lpj=2494134)
May 17 00:41:25.983409 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 17 00:41:25.983424 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 17 00:41:25.983438 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:41:25.983451 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:41:25.983465 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:41:25.983479 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:41:25.983506 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:41:25.983531 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:41:25.983545 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:41:25.983563 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:41:25.983577 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:41:25.983592 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:41:25.983606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:41:25.983620 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:41:25.983634 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:41:25.983650 kernel: Freeing SMP alternatives memory: 32K
May 17 00:41:25.983667 kernel: pid_max: default: 32768 minimum: 301
May 17 00:41:25.983679 kernel: LSM: Security Framework initializing
May 17 00:41:25.983693 kernel: SELinux: Initializing.
May 17 00:41:25.983706 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:41:25.983720 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:41:25.983732 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 17 00:41:25.983744 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 17 00:41:25.983760 kernel: signal: max sigframe size: 1776
May 17 00:41:25.983774 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:41:25.983788 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:41:25.983802 kernel: smp: Bringing up secondary CPUs ...
May 17 00:41:25.983813 kernel: x86: Booting SMP configuration:
May 17 00:41:25.983825 kernel: .... node #0, CPUs: #1
May 17 00:41:25.983836 kernel: kvm-clock: cpu 1, msr 6919a041, secondary cpu clock
May 17 00:41:25.983848 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
May 17 00:41:25.983860 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:41:25.983877 kernel: smpboot: Max logical packages: 1
May 17 00:41:25.983889 kernel: smpboot: Total of 2 processors activated (9976.53 BogoMIPS)
May 17 00:41:25.983903 kernel: devtmpfs: initialized
May 17 00:41:25.983917 kernel: x86/mm: Memory block size: 128MB
May 17 00:41:25.983929 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:41:25.983942 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:41:25.983956 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:41:25.983968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:41:25.983980 kernel: audit: initializing netlink subsys (disabled)
May 17 00:41:25.983996 kernel: audit: type=2000 audit(1747442484.981:1): state=initialized audit_enabled=0 res=1
May 17 00:41:25.984010 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:41:25.984025 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:41:25.984039 kernel: cpuidle: using governor menu
May 17 00:41:25.984053 kernel: ACPI: bus type PCI registered
May 17 00:41:25.984067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:41:25.984088 kernel: dca service started, version 1.12.1
May 17 00:41:25.984101 kernel: PCI: Using configuration type 1 for base access
May 17 00:41:25.984114 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:41:25.984134 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:41:25.984147 kernel: ACPI: Added _OSI(Module Device)
May 17 00:41:25.984161 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:41:25.984175 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:41:25.984188 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:41:25.984201 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:41:25.984214 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:41:25.984228 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:41:25.984241 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:41:25.984258 kernel: ACPI: Interpreter enabled
May 17 00:41:25.984273 kernel: ACPI: PM: (supports S0 S5)
May 17 00:41:25.984287 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:41:25.984319 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:41:25.984333 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:41:25.984347 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:41:25.984655 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:41:25.984809 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 17 00:41:25.984839 kernel: acpiphp: Slot [3] registered
May 17 00:41:25.984854 kernel: acpiphp: Slot [4] registered
May 17 00:41:25.984868 kernel: acpiphp: Slot [5] registered
May 17 00:41:25.984880 kernel: acpiphp: Slot [6] registered
May 17 00:41:25.984894 kernel: acpiphp: Slot [7] registered
May 17 00:41:25.984907 kernel: acpiphp: Slot [8] registered
May 17 00:41:25.984920 kernel: acpiphp: Slot [9] registered
May 17 00:41:25.984933 kernel: acpiphp: Slot [10] registered
May 17 00:41:25.984945 kernel: acpiphp: Slot [11] registered
May 17 00:41:25.984962 kernel: acpiphp: Slot [12] registered
May 17 00:41:25.984975 kernel: acpiphp: Slot [13] registered
May 17 00:41:25.984989 kernel: acpiphp: Slot [14] registered
May 17 00:41:25.985003 kernel: acpiphp: Slot [15] registered
May 17 00:41:25.985015 kernel: acpiphp: Slot [16] registered
May 17 00:41:25.985028 kernel: acpiphp: Slot [17] registered
May 17 00:41:25.985042 kernel: acpiphp: Slot [18] registered
May 17 00:41:25.985055 kernel: acpiphp: Slot [19] registered
May 17 00:41:25.985069 kernel: acpiphp: Slot [20] registered
May 17 00:41:25.985086 kernel: acpiphp: Slot [21] registered
May 17 00:41:25.985100 kernel: acpiphp: Slot [22] registered
May 17 00:41:25.985113 kernel: acpiphp: Slot [23] registered
May 17 00:41:25.985127 kernel: acpiphp: Slot [24] registered
May 17 00:41:25.985140 kernel: acpiphp: Slot [25] registered
May 17 00:41:25.985153 kernel: acpiphp: Slot [26] registered
May 17 00:41:25.985167 kernel: acpiphp: Slot [27] registered
May 17 00:41:25.985181 kernel: acpiphp: Slot [28] registered
May 17 00:41:25.985194 kernel: acpiphp: Slot [29] registered
May 17 00:41:25.985207 kernel: acpiphp: Slot [30] registered
May 17 00:41:25.985225 kernel: acpiphp: Slot [31] registered
May 17 00:41:25.985238 kernel: PCI host bridge to bus 0000:00
May 17 00:41:25.985461 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:41:25.985604 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:41:25.985760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:41:25.985879 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:41:25.986003 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 17 00:41:25.986135 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:41:25.986312 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:41:25.986475 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:41:25.986635 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 17 00:41:25.986784 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 17 00:41:25.986930 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 17 00:41:25.987081 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 17 00:41:25.987243 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 17 00:41:25.987402 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 17 00:41:25.987584 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 17 00:41:25.987727 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 17 00:41:25.987961 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:41:25.988105 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 17 00:41:25.988260 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 17 00:41:25.988428 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 17 00:41:25.988547 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 17 00:41:25.988672 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 17 00:41:25.988800 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 17 00:41:25.988929 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 17 00:41:25.989077 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:41:25.989206 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:41:25.989344 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 17 00:41:25.989446 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 17 00:41:25.989532 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 17 00:41:25.989656 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:41:25.989779 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 17 00:41:25.989877 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 17 00:41:25.989966 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 17 00:41:25.990081 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 17 00:41:25.990170 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 17 00:41:25.990268 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 17 00:41:25.995621 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 17 00:41:25.995860 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 17 00:41:25.996027 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:41:25.996177 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 17 00:41:25.996337 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 17 00:41:25.996501 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 17 00:41:25.996639 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 17 00:41:25.996770 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 17 00:41:25.996903 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 17 00:41:25.997080 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 17 00:41:25.997227 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 17 00:41:25.997385 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 17 00:41:25.997403 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:41:25.997418 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:41:25.997432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:41:25.997446 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:41:25.997468 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:41:25.997482 kernel: iommu: Default domain type: Translated
May 17 00:41:25.997496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:41:25.997765 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 17 00:41:25.997912 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:41:25.998047 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 17 00:41:25.998065 kernel: vgaarb: loaded
May 17 00:41:25.998079 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:41:25.998094 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:41:25.998115 kernel: PTP clock support registered
May 17 00:41:25.998127 kernel: PCI: Using ACPI for IRQ routing
May 17 00:41:25.998139 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:41:25.998151 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:41:25.998164 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 17 00:41:25.998176 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:41:25.998188 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:41:25.998200 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:41:25.998214 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:41:25.998230 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:41:25.998242 kernel: pnp: PnP ACPI init
May 17 00:41:25.998253 kernel: pnp: PnP ACPI: found 4 devices
May 17 00:41:25.998265 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:41:25.998278 kernel: NET: Registered PF_INET protocol family
May 17 00:41:25.998309 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:41:25.998322 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:41:25.998335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:41:25.998353 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:41:25.998365 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 17 00:41:25.998377 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:41:25.998390 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:41:25.998403 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:41:25.998415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:41:25.998429 kernel: NET: Registered PF_XDP protocol family
May 17 00:41:25.998604 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:41:25.998744 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:41:25.998871 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:41:25.998990 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:41:25.999104 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 17 00:41:25.999273 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 17 00:41:26.002609 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:41:26.002748 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 17 00:41:26.002769 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:41:26.002912 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 35064 usecs
May 17 00:41:26.002935 kernel: PCI: CLS 0 bytes, default 64
May 17 00:41:26.002945 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:41:26.002954 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3946f721, max_idle_ns: 440795294991 ns
May 17 00:41:26.002962 kernel: Initialise system trusted keyrings
May 17 00:41:26.002975 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:41:26.002988 kernel: Key type asymmetric registered
May 17 00:41:26.003001 kernel: Asymmetric key parser 'x509' registered
May 17 00:41:26.003016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:41:26.003031 kernel: io scheduler mq-deadline registered
May 17 00:41:26.003050 kernel: io scheduler kyber registered
May 17 00:41:26.003064 kernel: io scheduler bfq registered
May 17 00:41:26.003079 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:41:26.003092 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 17 00:41:26.003103 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:41:26.003114 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:41:26.003129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:41:26.003144 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:41:26.003159 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:41:26.003178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:41:26.003191 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:41:26.003205 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:41:26.006573 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:41:26.006734 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:41:26.006878 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:41:25 UTC (1747442485)
May 17 00:41:26.007004 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 17 00:41:26.007034 kernel: intel_pstate: CPU model not supported
May 17 00:41:26.007049 kernel: NET: Registered PF_INET6 protocol family
May 17 00:41:26.007062 kernel: Segment Routing with IPv6
May 17 00:41:26.007074 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:41:26.007087 kernel: NET: Registered PF_PACKET protocol family
May 17 00:41:26.007101 kernel: Key type dns_resolver registered
May 17 00:41:26.007115 kernel: IPI shorthand broadcast: enabled
May 17 00:41:26.007129 kernel: sched_clock: Marking stable (681284276, 83343814)->(886203240, -121575150)
May 17 00:41:26.007142 kernel: registered taskstats version 1
May 17 00:41:26.007155 kernel: Loading compiled-in X.509 certificates
May 17 00:41:26.007173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:41:26.007185 kernel: Key type .fscrypt registered
May 17 00:41:26.007198 kernel: Key type fscrypt-provisioning registered
May 17 00:41:26.007213 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:41:26.007227 kernel: ima: Allocated hash algorithm: sha1
May 17 00:41:26.007242 kernel: ima: No architecture policies found
May 17 00:41:26.007256 kernel: clk: Disabling unused clocks
May 17 00:41:26.007269 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:41:26.007319 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:41:26.007335 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:41:26.007351 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:41:26.007365 kernel: Run /init as init process
May 17 00:41:26.007379 kernel: with arguments:
May 17 00:41:26.007395 kernel: /init
May 17 00:41:26.007439 kernel: with environment:
May 17 00:41:26.007457 kernel: HOME=/
May 17 00:41:26.007472 kernel: TERM=linux
May 17 00:41:26.007486 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:41:26.007511 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:41:26.007530 systemd[1]: Detected virtualization kvm.
May 17 00:41:26.007547 systemd[1]: Detected architecture x86-64.
May 17 00:41:26.007563 systemd[1]: Running in initrd.
May 17 00:41:26.007578 systemd[1]: No hostname configured, using default hostname.
May 17 00:41:26.007593 systemd[1]: Hostname set to .
May 17 00:41:26.007614 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:41:26.007629 systemd[1]: Queued start job for default target initrd.target.
May 17 00:41:26.007644 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:41:26.007659 systemd[1]: Reached target cryptsetup.target.
May 17 00:41:26.007673 systemd[1]: Reached target paths.target.
May 17 00:41:26.007688 systemd[1]: Reached target slices.target.
May 17 00:41:26.007701 systemd[1]: Reached target swap.target.
May 17 00:41:26.007716 systemd[1]: Reached target timers.target.
May 17 00:41:26.007736 systemd[1]: Listening on iscsid.socket.
May 17 00:41:26.007753 systemd[1]: Listening on iscsiuio.socket.
May 17 00:41:26.007769 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:41:26.007785 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:41:26.007800 systemd[1]: Listening on systemd-journald.socket.
May 17 00:41:26.007817 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:41:26.007833 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:41:26.007852 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:41:26.007868 systemd[1]: Reached target sockets.target.
May 17 00:41:26.007887 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:41:26.007902 systemd[1]: Finished network-cleanup.service.
May 17 00:41:26.007921 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:41:26.007936 systemd[1]: Starting systemd-journald.service...
May 17 00:41:26.007950 systemd[1]: Starting systemd-modules-load.service...
May 17 00:41:26.007968 systemd[1]: Starting systemd-resolved.service...
May 17 00:41:26.007984 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:41:26.007998 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:41:26.008013 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:41:26.008037 systemd-journald[183]: Journal started
May 17 00:41:26.008149 systemd-journald[183]: Runtime Journal (/run/log/journal/f75c14e840f44dc3a6cf385146f6332b) is 4.9M, max 39.5M, 34.5M free.
May 17 00:41:25.991618 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:41:26.036744 systemd[1]: Started systemd-journald.service.
May 17 00:41:26.013953 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:41:26.013972 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:41:26.047901 kernel: audit: type=1130 audit(1747442486.032:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.047941 kernel: audit: type=1130 audit(1747442486.034:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.047960 kernel: audit: type=1130 audit(1747442486.034:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.014031 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:41:26.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.056405 kernel: audit: type=1130 audit(1747442486.051:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.018905 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:41:26.034648 systemd[1]: Started systemd-resolved.service.
May 17 00:41:26.035187 systemd[1]: Reached target nss-lookup.target.
May 17 00:41:26.047879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:41:26.049958 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:41:26.057848 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:41:26.065802 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:41:26.065875 kernel: audit: type=1130 audit(1747442486.064:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.064560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:26.077614 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:41:26.078348 kernel: Bridge firewalling registered
May 17 00:41:26.081116 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:41:26.090754 kernel: audit: type=1130 audit(1747442486.081:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.083109 systemd[1]: Starting dracut-cmdline.service...
May 17 00:41:26.101255 dracut-cmdline[203]: dracut-dracut-053
May 17 00:41:26.105148 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:26.110330 kernel: SCSI subsystem initialized
May 17 00:41:26.127506 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:41:26.127584 kernel: device-mapper: uevent: version 1.0.3
May 17 00:41:26.137649 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:41:26.142148 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 17 00:41:26.143464 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:26.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.145233 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:26.149399 kernel: audit: type=1130 audit(1747442486.143:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.158672 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:26.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.163348 kernel: audit: type=1130 audit(1747442486.158:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.200363 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:41:26.222328 kernel: iscsi: registered transport (tcp)
May 17 00:41:26.251330 kernel: iscsi: registered transport (qla4xxx)
May 17 00:41:26.252325 kernel: QLogic iSCSI HBA Driver
May 17 00:41:26.303279 systemd[1]: Finished dracut-cmdline.service.
May 17 00:41:26.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.308356 kernel: audit: type=1130 audit(1747442486.303:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.306569 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:41:26.372372 kernel: raid6: avx2x4 gen() 14266 MB/s
May 17 00:41:26.389384 kernel: raid6: avx2x4 xor() 6005 MB/s
May 17 00:41:26.406384 kernel: raid6: avx2x2 gen() 15083 MB/s
May 17 00:41:26.423405 kernel: raid6: avx2x2 xor() 13463 MB/s
May 17 00:41:26.440382 kernel: raid6: avx2x1 gen() 17499 MB/s
May 17 00:41:26.457415 kernel: raid6: avx2x1 xor() 16304 MB/s
May 17 00:41:26.474385 kernel: raid6: sse2x4 gen() 10687 MB/s
May 17 00:41:26.491385 kernel: raid6: sse2x4 xor() 5678 MB/s
May 17 00:41:26.508384 kernel: raid6: sse2x2 gen() 9546 MB/s
May 17 00:41:26.525380 kernel: raid6: sse2x2 xor() 7033 MB/s
May 17 00:41:26.542377 kernel: raid6: sse2x1 gen() 7787 MB/s
May 17 00:41:26.559663 kernel: raid6: sse2x1 xor() 5303 MB/s
May 17 00:41:26.559787 kernel: raid6: using algorithm avx2x1 gen() 17499 MB/s
May 17 00:41:26.559807 kernel: raid6: .... xor() 16304 MB/s, rmw enabled
May 17 00:41:26.560726 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:41:26.574349 kernel: xor: automatically using best checksumming function avx
May 17 00:41:26.707364 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:41:26.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.723000 audit: BPF prog-id=7 op=LOAD
May 17 00:41:26.723000 audit: BPF prog-id=8 op=LOAD
May 17 00:41:26.722923 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:41:26.725011 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:26.746830 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 17 00:41:26.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.755460 systemd[1]: Started systemd-udevd.service.
May 17 00:41:26.759866 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:41:26.785512 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation
May 17 00:41:26.835519 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:41:26.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.837376 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:26.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:26.906345 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:27.002342 kernel: scsi host0: Virtio SCSI HBA
May 17 00:41:27.006576 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:41:27.083661 kernel: ACPI: bus type USB registered
May 17 00:41:27.083701 kernel: usbcore: registered new interface driver usbfs
May 17 00:41:27.083721 kernel: usbcore: registered new interface driver hub
May 17 00:41:27.083739 kernel: usbcore: registered new device driver usb
May 17 00:41:27.083757 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:41:27.083777 kernel: GPT:9289727 != 125829119
May 17 00:41:27.083795 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:41:27.083813 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
May 17 00:41:27.083837 kernel: GPT:9289727 != 125829119
May 17 00:41:27.083849 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:41:27.083860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:27.083871 kernel: ehci-pci: EHCI PCI platform driver
May 17 00:41:27.083882 kernel: uhci_hcd: USB Universal Host Controller Interface driver
May 17 00:41:27.083893 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:41:27.083904 kernel: libata version 3.00 loaded.
May 17 00:41:27.086350 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
May 17 00:41:27.105790 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:41:27.105948 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:41:27.106053 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:41:27.106153 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
May 17 00:41:27.106327 kernel: hub 1-0:1.0: USB hub found
May 17 00:41:27.106505 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:41:27.141583 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:41:27.143768 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442)
May 17 00:41:27.143804 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:41:27.173931 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:41:27.173964 kernel: AES CTR mode by8 optimization enabled
May 17 00:41:27.173982 kernel: scsi host1: ata_piix
May 17 00:41:27.174236 kernel: scsi host2: ata_piix
May 17 00:41:27.174438 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:41:27.174459 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:41:27.148912 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:41:27.152907 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:41:27.158555 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:41:27.216098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:41:27.218095 systemd[1]: Starting disk-uuid.service...
May 17 00:41:27.225053 disk-uuid[505]: Primary Header is updated.
May 17 00:41:27.225053 disk-uuid[505]: Secondary Entries is updated.
May 17 00:41:27.225053 disk-uuid[505]: Secondary Header is updated.
May 17 00:41:27.247369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:27.253324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:28.260444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:28.260559 disk-uuid[506]: The operation has completed successfully.
May 17 00:41:28.324013 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:41:28.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.324189 systemd[1]: Finished disk-uuid.service.
May 17 00:41:28.326061 systemd[1]: Starting verity-setup.service...
May 17 00:41:28.350695 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:41:28.412301 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:41:28.414059 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:41:28.416347 systemd[1]: Finished verity-setup.service.
May 17 00:41:28.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.507515 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:41:28.508097 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:41:28.508813 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:41:28.510114 systemd[1]: Starting ignition-setup.service...
May 17 00:41:28.511696 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:41:28.527348 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:28.527427 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:28.527440 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:28.553931 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:41:28.563958 systemd[1]: Finished ignition-setup.service.
May 17 00:41:28.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.566216 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:41:28.664462 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:41:28.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.665000 audit: BPF prog-id=9 op=LOAD
May 17 00:41:28.666537 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:28.699797 systemd-networkd[689]: lo: Link UP
May 17 00:41:28.699810 systemd-networkd[689]: lo: Gained carrier
May 17 00:41:28.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.700528 systemd-networkd[689]: Enumeration completed
May 17 00:41:28.700663 systemd[1]: Started systemd-networkd.service.
May 17 00:41:28.701167 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:41:28.701663 systemd[1]: Reached target network.target.
May 17 00:41:28.703146 systemd[1]: Starting iscsiuio.service...
May 17 00:41:28.706239 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:41:28.707162 systemd-networkd[689]: eth1: Link UP
May 17 00:41:28.707167 systemd-networkd[689]: eth1: Gained carrier
May 17 00:41:28.711726 systemd-networkd[689]: eth0: Link UP
May 17 00:41:28.711734 systemd-networkd[689]: eth0: Gained carrier
May 17 00:41:28.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.725351 systemd[1]: Started iscsiuio.service.
May 17 00:41:28.725455 systemd-networkd[689]: eth0: DHCPv4 address 137.184.190.96/20, gateway 137.184.176.1 acquired from 169.254.169.253
May 17 00:41:28.727527 systemd[1]: Starting iscsid.service...
May 17 00:41:28.732503 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.23/20 acquired from 169.254.169.253
May 17 00:41:28.735087 iscsid[695]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:28.735087 iscsid[695]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 17 00:41:28.735087 iscsid[695]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:41:28.735087 iscsid[695]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:41:28.735087 iscsid[695]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:28.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.746857 iscsid[695]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:41:28.738315 systemd[1]: Started iscsid.service.
May 17 00:41:28.747468 systemd[1]: Starting dracut-initqueue.service...
May 17 00:41:28.754916 ignition[621]: Ignition 2.14.0
May 17 00:41:28.754928 ignition[621]: Stage: fetch-offline
May 17 00:41:28.754994 ignition[621]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.755025 ignition[621]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.761183 ignition[621]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.762228 ignition[621]: parsed url from cmdline: ""
May 17 00:41:28.762239 ignition[621]: no config URL provided
May 17 00:41:28.762253 ignition[621]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:28.762272 ignition[621]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:28.762284 ignition[621]: failed to fetch config: resource requires networking
May 17 00:41:28.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.764714 systemd[1]: Finished dracut-initqueue.service.
May 17 00:41:28.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.762454 ignition[621]: Ignition finished successfully
May 17 00:41:28.765755 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:41:28.766351 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:41:28.766750 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:41:28.767425 systemd[1]: Reached target remote-fs.target.
May 17 00:41:28.769219 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:41:28.771871 systemd[1]: Starting ignition-fetch.service...
May 17 00:41:28.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.782548 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:41:28.786573 ignition[705]: Ignition 2.14.0
May 17 00:41:28.787318 ignition[705]: Stage: fetch
May 17 00:41:28.787817 ignition[705]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.788424 ignition[705]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.791068 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.791885 ignition[705]: parsed url from cmdline: ""
May 17 00:41:28.791978 ignition[705]: no config URL provided
May 17 00:41:28.792421 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:28.792887 ignition[705]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:28.793332 ignition[705]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:41:28.836003 ignition[705]: GET result: OK
May 17 00:41:28.836169 ignition[705]: parsing config with SHA512: 769ddfafdacf96a6de4a84329e62e8e2263b20945b0a6f4f22f52f3a1782a6e8c2d4cd594814741a6a6fa95d1842197003f1cadf116d522341b56f4d8c38357a
May 17 00:41:28.848272 unknown[705]: fetched base config from "system"
May 17 00:41:28.848307 unknown[705]: fetched base config from "system"
May 17 00:41:28.848316 unknown[705]: fetched user config from "digitalocean"
May 17 00:41:28.849083 ignition[705]: fetch: fetch complete
May 17 00:41:28.849093 ignition[705]: fetch: fetch passed
May 17 00:41:28.849163 ignition[705]: Ignition finished successfully
May 17 00:41:28.851078 systemd[1]: Finished ignition-fetch.service.
May 17 00:41:28.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.852917 systemd[1]: Starting ignition-kargs.service...
May 17 00:41:28.867124 ignition[715]: Ignition 2.14.0
May 17 00:41:28.867811 ignition[715]: Stage: kargs
May 17 00:41:28.868308 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.868826 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.871606 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.874273 ignition[715]: kargs: kargs passed
May 17 00:41:28.875750 ignition[715]: Ignition finished successfully
May 17 00:41:28.877887 systemd[1]: Finished ignition-kargs.service.
May 17 00:41:28.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.879740 systemd[1]: Starting ignition-disks.service...
May 17 00:41:28.891585 ignition[721]: Ignition 2.14.0
May 17 00:41:28.891597 ignition[721]: Stage: disks
May 17 00:41:28.891732 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.891763 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.894324 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.895560 ignition[721]: disks: disks passed
May 17 00:41:28.895622 ignition[721]: Ignition finished successfully
May 17 00:41:28.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.896589 systemd[1]: Finished ignition-disks.service.
May 17 00:41:28.897267 systemd[1]: Reached target initrd-root-device.target.
May 17 00:41:28.897862 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:28.898407 systemd[1]: Reached target local-fs.target.
May 17 00:41:28.898983 systemd[1]: Reached target sysinit.target.
May 17 00:41:28.899664 systemd[1]: Reached target basic.target.
May 17 00:41:28.901274 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:41:28.920497 systemd-fsck[729]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:41:28.925425 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:41:28.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.927240 systemd[1]: Mounting sysroot.mount...
May 17 00:41:28.945315 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:41:28.946208 systemd[1]: Mounted sysroot.mount.
May 17 00:41:28.946881 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:41:28.949318 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:41:28.951257 systemd[1]: Starting flatcar-digitalocean-network.service...
May 17 00:41:28.953962 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:41:28.954718 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:41:28.954772 systemd[1]: Reached target ignition-diskful.target.
May 17 00:41:28.957098 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:41:28.959111 systemd[1]: Starting initrd-setup-root.service...
May 17 00:41:28.971087 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:41:28.991098 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
May 17 00:41:29.004109 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:41:29.016177 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:41:29.100393 coreos-metadata[735]: May 17 00:41:29.100 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:29.104149 systemd[1]: Finished initrd-setup-root.service.
May 17 00:41:29.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.106733 systemd[1]: Starting ignition-mount.service...
May 17 00:41:29.109001 systemd[1]: Starting sysroot-boot.service...
May 17 00:41:29.119357 coreos-metadata[735]: May 17 00:41:29.117 INFO Fetch successful
May 17 00:41:29.123653 coreos-metadata[736]: May 17 00:41:29.123 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:29.127037 bash[786]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:41:29.131980 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:41:29.132124 systemd[1]: Finished flatcar-digitalocean-network.service.
May 17 00:41:29.136365 coreos-metadata[736]: May 17 00:41:29.135 INFO Fetch successful
May 17 00:41:29.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.142486 coreos-metadata[736]: May 17 00:41:29.142 INFO wrote hostname ci-3510.3.7-n-9c3fefbd06 to /sysroot/etc/hostname
May 17 00:41:29.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.149212 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:41:29.153458 ignition[787]: INFO : Ignition 2.14.0
May 17 00:41:29.154304 ignition[787]: INFO : Stage: mount
May 17 00:41:29.154829 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:29.155441 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:29.158450 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:29.166725 ignition[787]: INFO : mount: mount passed
May 17 00:41:29.167241 ignition[787]: INFO : Ignition finished successfully
May 17 00:41:29.167688 systemd[1]: Finished sysroot-boot.service.
May 17 00:41:29.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.169570 systemd[1]: Finished ignition-mount.service.
May 17 00:41:29.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:29.436206 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:41:29.447891 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (796)
May 17 00:41:29.449333 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:29.449413 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:29.450694 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:29.456957 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:41:29.459085 systemd[1]: Starting ignition-files.service...
May 17 00:41:29.493489 ignition[816]: INFO : Ignition 2.14.0
May 17 00:41:29.493489 ignition[816]: INFO : Stage: files
May 17 00:41:29.495002 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:29.495002 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:29.496165 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:29.499648 ignition[816]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:41:29.502084 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:41:29.502084 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:41:29.505084 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:41:29.505985 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:41:29.507687 unknown[816]: wrote ssh authorized keys file for user: core
May 17 00:41:29.510408 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:41:29.510408 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:41:29.510408 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:41:29.510408 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:41:29.510408 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:41:29.545100 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:41:29.743327 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:29.744398 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:41:29.750231 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:41:30.019565 systemd-networkd[689]: eth1: Gained IPv6LL
May 17 00:41:30.424644 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:41:30.467724 systemd-networkd[689]: eth0: Gained IPv6LL
May 17 00:41:30.723166 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:41:30.724464 ignition[816]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:30.725040 ignition[816]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:30.725670 ignition[816]: INFO : files: op(d): [started] processing unit "containerd.service"
May 17 00:41:30.726886 ignition[816]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:41:30.728397 ignition[816]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:41:30.729486 ignition[816]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 17 00:41:30.729486 ignition[816]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 17 00:41:30.729486 ignition[816]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:41:30.733455 ignition[816]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:41:30.733455 ignition[816]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 17 00:41:30.733455 ignition[816]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:41:30.733455 ignition[816]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:41:30.733455 ignition[816]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:41:30.733455 ignition[816]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:41:30.739764 ignition[816]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:41:30.740884 ignition[816]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:41:30.741574 ignition[816]: INFO : files: files passed
May 17 00:41:30.742479 ignition[816]: INFO : Ignition finished successfully
May 17 00:41:30.745365 systemd[1]: Finished ignition-files.service.
May 17 00:41:30.751618 kernel: kauditd_printk_skb: 28 callbacks suppressed
May 17 00:41:30.751664 kernel: audit: type=1130 audit(1747442490.745:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.748204 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:41:30.752846 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:41:30.755646 systemd[1]: Starting ignition-quench.service...
May 17 00:41:30.760905 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:41:30.761873 systemd[1]: Finished ignition-quench.service.
May 17 00:41:30.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.765655 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:41:30.773356 kernel: audit: type=1130 audit(1747442490.762:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.773402 kernel: audit: type=1131 audit(1747442490.764:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.773448 kernel: audit: type=1130 audit(1747442490.768:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.765546 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:41:30.769465 systemd[1]: Reached target ignition-complete.target.
May 17 00:41:30.775077 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:41:30.797998 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:41:30.799065 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:41:30.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.802717 systemd[1]: Reached target initrd-fs.target.
May 17 00:41:30.807517 kernel: audit: type=1130 audit(1747442490.799:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.807566 kernel: audit: type=1131 audit(1747442490.802:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.807929 systemd[1]: Reached target initrd.target.
May 17 00:41:30.808610 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:41:30.810379 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:41:30.827048 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:41:30.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.831026 systemd[1]: Starting initrd-cleanup.service...
May 17 00:41:30.831819 kernel: audit: type=1130 audit(1747442490.827:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.845498 systemd[1]: Stopped target nss-lookup.target.
May 17 00:41:30.846700 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:41:30.847711 systemd[1]: Stopped target timers.target.
May 17 00:41:30.854828 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:41:30.855588 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:41:30.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.858979 systemd[1]: Stopped target initrd.target.
May 17 00:41:30.859517 kernel: audit: type=1131 audit(1747442490.856:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.860070 systemd[1]: Stopped target basic.target.
May 17 00:41:30.860924 systemd[1]: Stopped target ignition-complete.target.
May 17 00:41:30.861896 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:41:30.862820 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:41:30.863814 systemd[1]: Stopped target remote-fs.target.
May 17 00:41:30.864757 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:41:30.865693 systemd[1]: Stopped target sysinit.target.
May 17 00:41:30.866723 systemd[1]: Stopped target local-fs.target.
May 17 00:41:30.867515 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:41:30.868369 systemd[1]: Stopped target swap.target.
May 17 00:41:30.869158 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:41:30.869781 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:41:30.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.871723 systemd[1]: Stopped target cryptsetup.target.
May 17 00:41:30.875221 kernel: audit: type=1131 audit(1747442490.870:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.874074 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:41:30.874314 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:41:30.878906 kernel: audit: type=1131 audit(1747442490.874:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.875094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:41:30.875283 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:41:30.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.876009 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:41:30.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.876217 systemd[1]: Stopped ignition-files.service.
May 17 00:41:30.879713 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:41:30.879912 systemd[1]: Stopped flatcar-metadata-hostname.service.
May 17 00:41:30.883080 systemd[1]: Stopping ignition-mount.service...
May 17 00:41:30.884433 systemd[1]: Stopping iscsiuio.service...
May 17 00:41:30.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.890277 systemd[1]: Stopping sysroot-boot.service...
May 17 00:41:30.890772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:41:30.891120 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:41:30.891804 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:41:30.892040 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:41:30.894978 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:41:30.895146 systemd[1]: Stopped iscsiuio.service.
May 17 00:41:30.903950 ignition[854]: INFO : Ignition 2.14.0
May 17 00:41:30.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.905404 ignition[854]: INFO : Stage: umount
May 17 00:41:30.905404 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:30.905404 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:30.907102 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:41:30.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.907246 systemd[1]: Finished initrd-cleanup.service.
May 17 00:41:30.911376 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:30.913696 ignition[854]: INFO : umount: umount passed
May 17 00:41:30.914394 ignition[854]: INFO : Ignition finished successfully
May 17 00:41:30.916053 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:41:30.916193 systemd[1]: Stopped ignition-mount.service.
May 17 00:41:30.916998 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:41:30.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.917076 systemd[1]: Stopped ignition-disks.service.
May 17 00:41:30.917603 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:41:30.917740 systemd[1]: Stopped ignition-kargs.service.
May 17 00:41:30.918074 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:41:30.918112 systemd[1]: Stopped ignition-fetch.service.
May 17 00:41:30.918684 systemd[1]: Stopped target network.target.
May 17 00:41:30.919023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:41:30.919075 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:41:30.919523 systemd[1]: Stopped target paths.target.
May 17 00:41:30.919900 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:41:30.923378 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:41:30.923975 systemd[1]: Stopped target slices.target.
May 17 00:41:30.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.924661 systemd[1]: Stopped target sockets.target.
May 17 00:41:30.925251 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:41:30.925330 systemd[1]: Closed iscsid.socket.
May 17 00:41:30.925940 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:41:30.926002 systemd[1]: Closed iscsiuio.socket.
May 17 00:41:30.927621 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:41:30.927696 systemd[1]: Stopped ignition-setup.service.
May 17 00:41:30.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.929945 systemd[1]: Stopping systemd-networkd.service...
May 17 00:41:30.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.930768 systemd[1]: Stopping systemd-resolved.service...
May 17 00:41:30.932645 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:41:30.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.933392 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:41:30.933528 systemd[1]: Stopped sysroot-boot.service.
May 17 00:41:30.940000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:41:30.934455 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:41:30.934513 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:41:30.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.935723 systemd-networkd[689]: eth1: DHCPv6 lease lost
May 17 00:41:30.937207 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:41:30.937392 systemd[1]: Stopped systemd-resolved.service.
May 17 00:41:30.939598 systemd-networkd[689]: eth0: DHCPv6 lease lost
May 17 00:41:30.944000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:41:30.941025 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:41:30.941161 systemd[1]: Stopped systemd-networkd.service.
May 17 00:41:30.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.942774 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:41:30.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.942838 systemd[1]: Closed systemd-networkd.socket.
May 17 00:41:30.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.944699 systemd[1]: Stopping network-cleanup.service...
May 17 00:41:30.945119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:41:30.945223 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:41:30.946220 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:41:30.946318 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:41:30.947032 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:41:30.947082 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:41:30.952389 systemd[1]: Stopping systemd-udevd.service...
May 17 00:41:30.954270 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:41:30.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.959723 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:41:30.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.959918 systemd[1]: Stopped systemd-udevd.service.
May 17 00:41:30.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.960764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:41:30.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.960839 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:41:30.961402 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:41:30.961465 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:41:30.962070 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:41:30.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.962154 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:41:30.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.962695 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:41:30.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.962759 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:41:30.963623 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:41:30.963684 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:41:30.965735 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:41:30.966191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:41:30.966272 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:41:30.967159 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:41:30.967211 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:41:30.967650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:41:30.967701 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:41:30.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:30.976690 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:41:30.977517 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:41:30.977683 systemd[1]: Stopped network-cleanup.service.
May 17 00:41:30.984269 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:41:30.984428 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:41:30.985451 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:41:30.987707 systemd[1]: Starting initrd-switch-root.service...
May 17 00:41:30.999514 systemd[1]: Switching root.
May 17 00:41:31.004000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:41:31.004000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:41:31.004000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:41:31.004000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:41:31.004000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:41:31.023272 iscsid[695]: iscsid shutting down.
May 17 00:41:31.024179 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
May 17 00:41:31.024274 systemd-journald[183]: Journal stopped
May 17 00:41:34.678638 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:41:34.678723 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:41:34.678747 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:41:34.678760 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:41:34.678786 kernel: SELinux: policy capability open_perms=1
May 17 00:41:34.678798 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:41:34.678815 kernel: SELinux: policy capability always_check_network=0
May 17 00:41:34.678827 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:41:34.678839 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:41:34.678850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:41:34.678864 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:41:34.678878 systemd[1]: Successfully loaded SELinux policy in 55.014ms.
May 17 00:41:34.678901 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.785ms.
May 17 00:41:34.678916 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:41:34.678930 systemd[1]: Detected virtualization kvm.
May 17 00:41:34.678943 systemd[1]: Detected architecture x86-64.
May 17 00:41:34.678961 systemd[1]: Detected first boot.
May 17 00:41:34.678976 systemd[1]: Hostname set to .
May 17 00:41:34.678994 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:41:34.679008 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:41:34.679020 systemd[1]: Populated /etc with preset unit settings.
May 17 00:41:34.679033 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:41:34.679048 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:41:34.679062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:41:34.679077 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:41:34.679095 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 17 00:41:34.679108 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:41:34.679124 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:41:34.679137 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 17 00:41:34.679149 systemd[1]: Created slice system-getty.slice.
May 17 00:41:34.679173 systemd[1]: Created slice system-modprobe.slice.
May 17 00:41:34.679192 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:41:34.679209 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:41:34.679227 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:41:34.679252 systemd[1]: Created slice user.slice.
May 17 00:41:34.679269 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:41:34.679304 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:41:34.679319 systemd[1]: Set up automount boot.automount.
May 17 00:41:34.679332 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:41:34.679352 systemd[1]: Reached target integritysetup.target.
May 17 00:41:34.679373 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:41:34.679400 systemd[1]: Reached target remote-fs.target.
May 17 00:41:34.679418 systemd[1]: Reached target slices.target.
May 17 00:41:34.679431 systemd[1]: Reached target swap.target.
May 17 00:41:34.679443 systemd[1]: Reached target torcx.target.
May 17 00:41:34.679459 systemd[1]: Reached target veritysetup.target.
May 17 00:41:34.679483 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:41:34.679503 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:41:34.679523 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:41:34.679542 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:41:34.679562 systemd[1]: Listening on systemd-journald.socket.
May 17 00:41:34.679583 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:41:34.679603 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:41:34.679624 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:41:34.679643 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:41:34.679664 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:41:34.679686 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:41:34.679706 systemd[1]: Mounting media.mount...
May 17 00:41:34.679727 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:34.679751 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:41:34.679772 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:41:34.679792 systemd[1]: Mounting tmp.mount...
May 17 00:41:34.679814 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:41:34.679829 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:34.679848 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:41:34.679869 systemd[1]: Starting modprobe@configfs.service...
May 17 00:41:34.679891 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:34.679913 systemd[1]: Starting modprobe@drm.service...
May 17 00:41:34.679937 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:34.679958 systemd[1]: Starting modprobe@fuse.service...
May 17 00:41:34.679979 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:34.680002 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:41:34.680022 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:41:34.680042 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:41:34.680063 systemd[1]: Starting systemd-journald.service...
May 17 00:41:34.680084 systemd[1]: Starting systemd-modules-load.service...
May 17 00:41:34.680104 systemd[1]: Starting systemd-network-generator.service...
May 17 00:41:34.680130 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:41:34.680151 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:34.680173 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:34.680194 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:41:34.680211 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:41:34.680224 systemd[1]: Mounted media.mount.
May 17 00:41:34.680236 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:41:34.680250 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:41:34.680262 systemd[1]: Mounted tmp.mount.
May 17 00:41:34.680278 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:41:34.698362 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:41:34.698410 systemd[1]: Finished modprobe@configfs.service.
May 17 00:41:34.698424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:34.698439 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:34.698453 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:41:34.698468 systemd[1]: Finished modprobe@drm.service.
May 17 00:41:34.698486 kernel: loop: module loaded
May 17 00:41:34.698505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:34.698532 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:34.698551 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:34.698570 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:34.698585 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:34.698597 systemd[1]: Finished systemd-network-generator.service.
May 17 00:41:34.698614 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:41:34.698626 systemd[1]: Reached target network-pre.target.
May 17 00:41:34.698640 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:41:34.698654 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:41:34.698667 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:41:34.698679 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:34.698691 systemd[1]: Starting systemd-random-seed.service...
May 17 00:41:34.698704 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:34.698717 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:34.698733 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:41:34.698748 systemd[1]: Finished systemd-random-seed.service.
May 17 00:41:34.698777 systemd[1]: Reached target first-boot-complete.target.
May 17 00:41:34.698799 kernel: fuse: init (API version 7.34)
May 17 00:41:34.698817 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:41:34.698836 systemd[1]: Finished modprobe@fuse.service.
May 17 00:41:34.698858 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:41:34.698873 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:41:34.698886 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:34.698912 systemd-journald[992]: Journal started
May 17 00:41:34.699011 systemd-journald[992]: Runtime Journal (/run/log/journal/f75c14e840f44dc3a6cf385146f6332b) is 4.9M, max 39.5M, 34.5M free.
May 17 00:41:34.429000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:41:34.429000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 17 00:41:34.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.701514 systemd[1]: Started systemd-journald.service.
May 17 00:41:34.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.657000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:41:34.657000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffedec95ce0 a2=4000 a3=7ffedec95d7c items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:34.657000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:41:34.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.712458 systemd-journald[992]: Time spent on flushing to /var/log/journal/f75c14e840f44dc3a6cf385146f6332b is 46.238ms for 1088 entries.
May 17 00:41:34.712458 systemd-journald[992]: System Journal (/var/log/journal/f75c14e840f44dc3a6cf385146f6332b) is 8.0M, max 195.6M, 187.6M free.
May 17 00:41:34.771515 systemd-journald[992]: Received client request to flush runtime journal.
May 17 00:41:34.704486 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:41:34.772916 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:41:34.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.783203 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:34.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.785819 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:41:34.787424 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:41:34.790385 systemd[1]: Starting systemd-sysusers.service...
May 17 00:41:34.816535 udevadm[1043]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:41:34.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.836789 systemd[1]: Finished systemd-sysusers.service.
May 17 00:41:34.839353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:41:34.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.879106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:35.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.549742 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:41:35.551879 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:35.579211 systemd-udevd[1052]: Using default interface naming scheme 'v252'.
May 17 00:41:35.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.608136 systemd[1]: Started systemd-udevd.service.
May 17 00:41:35.613096 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:35.621269 systemd[1]: Starting systemd-userdbd.service...
May 17 00:41:35.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.683061 systemd[1]: Started systemd-userdbd.service.
May 17 00:41:35.707124 systemd[1]: Found device dev-ttyS0.device.
May 17 00:41:35.725453 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:35.725949 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:35.727638 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:35.731387 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:35.736548 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:35.737331 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:41:35.737451 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:41:35.737614 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:35.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.739268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:35.739536 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:35.740198 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:35.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.747086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:35.747372 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:35.748580 kernel: kauditd_printk_skb: 83 callbacks suppressed
May 17 00:41:35.748638 kernel: audit: type=1130 audit(1747442495.747:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.749007 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:35.749202 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:35.756534 kernel: audit: type=1131 audit(1747442495.747:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.756684 kernel: audit: type=1130 audit(1747442495.754:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.755108 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:35.765866 kernel: audit: type=1131 audit(1747442495.754:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.829627 systemd-networkd[1062]: lo: Link UP
May 17 00:41:35.829640 systemd-networkd[1062]: lo: Gained carrier
May 17 00:41:35.830331 systemd-networkd[1062]: Enumeration completed
May 17 00:41:35.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.830434 systemd-networkd[1062]: eth1: Configuring with /run/systemd/network/10-66:c8:56:c0:b1:d0.network.
May 17 00:41:35.830538 systemd[1]: Started systemd-networkd.service.
May 17 00:41:35.833425 kernel: audit: type=1130 audit(1747442495.830:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.834886 systemd-networkd[1062]: eth0: Configuring with /run/systemd/network/10-f2:cc:68:d3:83:3f.network.
May 17 00:41:35.835732 systemd-networkd[1062]: eth1: Link UP
May 17 00:41:35.835742 systemd-networkd[1062]: eth1: Gained carrier
May 17 00:41:35.848866 systemd-networkd[1062]: eth0: Link UP
May 17 00:41:35.848880 systemd-networkd[1062]: eth0: Gained carrier
May 17 00:41:35.860000 audit[1069]: AVC avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:41:35.869118 kernel: audit: type=1400 audit(1747442495.860:128): avc: denied { confidentiality } for pid=1069 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:41:35.869247 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:41:35.877331 kernel: ACPI: button: Power Button [PWRF]
May 17 00:41:35.860000 audit[1069]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c1811eb680 a1=338ac a2=7f7bba12dbc5 a3=5 items=110 ppid=1052 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:35.860000 audit: CWD cwd="/"
May 17 00:41:35.888054 kernel: audit: type=1300 audit(1747442495.860:128): arch=c000003e syscall=175 success=yes exit=0 a0=55c1811eb680 a1=338ac a2=7f7bba12dbc5 a3=5 items=110 ppid=1052 pid=1069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:35.888128 kernel: audit: type=1307 audit(1747442495.860:128): cwd="/"
May 17 00:41:35.860000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.890607 kernel: audit: type=1302 audit(1747442495.860:128): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.890693 kernel: audit: type=1302 audit(1747442495.860:128): item=1 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=1 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=2 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=3 name=(null) inode=13600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=4 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=5 name=(null) inode=13601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=6 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=7 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=8 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=9 name=(null) inode=13603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=10 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=11 name=(null) inode=13604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=12 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=13 name=(null) inode=13605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=14 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=15 name=(null) inode=13606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=16 name=(null) inode=13602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=17 name=(null) inode=13607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=18 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=19 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=20 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=21 name=(null) inode=13609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=22 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=23 name=(null) inode=13610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=24 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=25 name=(null) inode=13611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=26 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=27 name=(null) inode=13612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=28 name=(null) inode=13608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=29 name=(null) inode=13613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=30 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=31 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=32 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=33 name=(null) inode=13615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=34 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=35 name=(null) inode=13616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=36 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=37 name=(null) inode=13617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=38 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=39 name=(null) inode=13618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=40 name=(null) inode=13614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=41 name=(null) inode=13619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=42 name=(null) inode=13599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=43 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=44 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=45 name=(null) inode=13621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=46 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=47 name=(null) inode=13622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=48 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=49 name=(null) inode=13623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=50 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=51 name=(null) inode=13624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=52 name=(null) inode=13620 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=53 name=(null) inode=13625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=55 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=56 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=57 name=(null) inode=13627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=58 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=59 name=(null) inode=13628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=60 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=61 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.860000 audit: PATH item=62 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=63 name=(null) inode=13630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=64 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=65 name=(null) inode=13631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=66 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=67 name=(null) inode=13632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=68 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=69 name=(null) inode=13633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=70 name=(null) inode=13629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=71 name=(null) inode=13634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=72 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=73 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=74 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=75 name=(null) inode=13636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=76 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=77 name=(null) inode=13637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=78 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=79 name=(null) inode=13638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=80 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:41:35.860000 audit: PATH item=81 name=(null) inode=13639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=82 name=(null) inode=13635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=83 name=(null) inode=13640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=84 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=85 name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=86 name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=87 name=(null) inode=13642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=88 name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=89 name=(null) inode=13643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=90 
name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=91 name=(null) inode=13644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=92 name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=93 name=(null) inode=13645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=94 name=(null) inode=13641 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=95 name=(null) inode=13646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=96 name=(null) inode=13626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=97 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=98 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=99 name=(null) inode=13648 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=100 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=101 name=(null) inode=13649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=102 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=103 name=(null) inode=13650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=104 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=105 name=(null) inode=13651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=106 name=(null) inode=13647 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=107 name=(null) inode=13652 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PATH item=109 name=(null) inode=13655 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:35.860000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:41:35.912343 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 17 00:41:35.948333 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:41:35.958225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:41:35.993332 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:41:36.122329 kernel: EDAC MC: Ver: 3.0.0 May 17 00:41:36.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:36.147020 systemd[1]: Finished systemd-udev-settle.service. May 17 00:41:36.149339 systemd[1]: Starting lvm2-activation-early.service... May 17 00:41:36.171929 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:36.204568 systemd[1]: Finished lvm2-activation-early.service. May 17 00:41:36.205169 systemd[1]: Reached target cryptsetup.target. May 17 00:41:36.207783 systemd[1]: Starting lvm2-activation.service... May 17 00:41:36.215060 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:36.245164 systemd[1]: Finished lvm2-activation.service. 
May 17 00:41:36.245917 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:36.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.248520 systemd[1]: Mounting media-configdrive.mount...
May 17 00:41:36.249005 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:41:36.249092 systemd[1]: Reached target machines.target.
May 17 00:41:36.252991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 17 00:41:36.276051 kernel: ISO 9660 Extensions: RRIP_1991A
May 17 00:41:36.272458 systemd[1]: Mounted media-configdrive.mount.
May 17 00:41:36.272978 systemd[1]: Reached target local-fs.target.
May 17 00:41:36.275592 systemd[1]: Starting ldconfig.service...
May 17 00:41:36.276684 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:36.276766 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:36.280361 systemd[1]: Starting systemd-boot-update.service...
May 17 00:41:36.284973 systemd[1]: Starting systemd-machine-id-commit.service...
May 17 00:41:36.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.290383 systemd[1]: Starting systemd-sysext.service...
May 17 00:41:36.292092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 17 00:41:36.293284 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl)
May 17 00:41:36.295284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 17 00:41:36.326356 systemd[1]: Unmounting usr-share-oem.mount...
May 17 00:41:36.339099 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 17 00:41:36.339473 systemd[1]: Unmounted usr-share-oem.mount.
May 17 00:41:36.370104 kernel: loop0: detected capacity change from 0 to 221472
May 17 00:41:36.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.380288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:41:36.381272 systemd[1]: Finished systemd-machine-id-commit.service.
May 17 00:41:36.407662 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:41:36.433367 kernel: loop1: detected capacity change from 0 to 221472
May 17 00:41:36.449941 (sd-sysext)[1121]: Using extensions 'kubernetes'.
May 17 00:41:36.450599 (sd-sysext)[1121]: Merged extensions into '/usr'.
May 17 00:41:36.453981 systemd-fsck[1117]: fsck.fat 4.2 (2021-01-31)
May 17 00:41:36.453981 systemd-fsck[1117]: /dev/vda1: 790 files, 120726/258078 clusters
May 17 00:41:36.478874 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 17 00:41:36.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.482481 systemd[1]: Mounting boot.mount...
May 17 00:41:36.483427 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:36.489281 systemd[1]: Mounting usr-share-oem.mount...
May 17 00:41:36.493075 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:36.496804 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:36.501698 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:36.507164 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:36.508855 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:36.509161 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:36.509459 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:36.527983 systemd[1]: Mounted boot.mount.
May 17 00:41:36.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.531208 systemd[1]: Mounted usr-share-oem.mount.
May 17 00:41:36.534872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:36.535166 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:36.537466 systemd[1]: Finished systemd-sysext.service.
May 17 00:41:36.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.548275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:36.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.548584 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:36.549693 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:36.549986 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:36.559877 systemd[1]: Starting ensure-sysext.service...
May 17 00:41:36.565566 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:36.565706 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:36.568201 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:41:36.582718 systemd[1]: Reloading.
May 17 00:41:36.634672 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 00:41:36.638040 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:41:36.642239 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:41:36.770000 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-05-17T00:41:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:41:36.770030 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-05-17T00:41:36Z" level=info msg="torcx already run"
May 17 00:41:36.791832 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:41:36.923698 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:41:36.923725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:41:36.951131 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:41:37.024562 systemd[1]: Finished ldconfig.service.
May 17 00:41:37.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.025881 systemd[1]: Finished systemd-boot-update.service.
May 17 00:41:37.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.028182 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 17 00:41:37.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.033022 systemd[1]: Starting audit-rules.service...
May 17 00:41:37.035940 systemd[1]: Starting clean-ca-certificates.service...
May 17 00:41:37.040173 systemd[1]: Starting systemd-journal-catalog-update.service...
May 17 00:41:37.048281 systemd[1]: Starting systemd-resolved.service...
May 17 00:41:37.054348 systemd[1]: Starting systemd-timesyncd.service...
May 17 00:41:37.059550 systemd[1]: Starting systemd-update-utmp.service...
May 17 00:41:37.062202 systemd-networkd[1062]: eth0: Gained IPv6LL
May 17 00:41:37.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.066041 systemd[1]: Finished clean-ca-certificates.service.
May 17 00:41:37.071783 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:41:37.077611 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:37.078049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:37.080443 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:37.083359 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:37.088446 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:37.088992 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:37.089209 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.094000 audit[1222]: SYSTEM_BOOT pid=1222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.092033 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:41:37.092219 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:37.093746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:37.094018 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:37.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.101689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:37.101949 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:37.103153 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:37.106446 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:41:37.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.111732 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:37.112078 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:37.117757 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:37.118141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:37.125819 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:37.133565 systemd[1]: Starting modprobe@drm.service...
May 17 00:41:37.136951 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:37.140815 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:37.141860 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:37.142166 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:37.147593 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:41:37.157417 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:41:37.157744 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:37.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:37.161542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:37.161901 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:37.163387 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:41:37.163621 systemd[1]: Finished modprobe@drm.service. May 17 00:41:37.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.166641 systemd[1]: Finished ensure-sysext.service. May 17 00:41:37.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.173074 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:37.173434 systemd[1]: Finished modprobe@loop.service. May 17 00:41:37.174063 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:37.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.176532 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:41:37.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:37.189449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:37.189743 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:37.190306 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:37.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.214577 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:41:37.218085 systemd[1]: Starting systemd-update-done.service... May 17 00:41:37.248953 systemd[1]: Finished systemd-update-done.service. May 17 00:41:37.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.265000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:41:37.265000 audit[1258]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce487a440 a2=420 a3=0 items=0 ppid=1215 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.265000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:41:37.265950 augenrules[1258]: No rules May 17 00:41:37.267395 systemd[1]: Finished audit-rules.service. May 17 00:41:37.301614 systemd-resolved[1219]: Positive Trust Anchors: May 17 00:41:37.301636 systemd-resolved[1219]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:41:37.301683 systemd-resolved[1219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:41:37.309435 systemd[1]: Started systemd-timesyncd.service. May 17 00:41:37.309955 systemd-resolved[1219]: Using system hostname 'ci-3510.3.7-n-9c3fefbd06'. May 17 00:41:37.310096 systemd[1]: Reached target time-set.target. May 17 00:41:37.314644 systemd[1]: Started systemd-resolved.service. May 17 00:41:37.315265 systemd[1]: Reached target network.target. May 17 00:41:37.315684 systemd[1]: Reached target network-online.target. May 17 00:41:37.316041 systemd[1]: Reached target nss-lookup.target. May 17 00:41:37.316383 systemd[1]: Reached target sysinit.target. May 17 00:41:37.316840 systemd[1]: Started motdgen.path. May 17 00:41:37.317192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:41:37.317863 systemd[1]: Started logrotate.timer. May 17 00:41:37.318284 systemd[1]: Started mdadm.timer. May 17 00:41:37.318648 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:41:37.319046 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:41:37.319080 systemd[1]: Reached target paths.target. May 17 00:41:37.319412 systemd[1]: Reached target timers.target. May 17 00:41:37.320221 systemd[1]: Listening on dbus.socket. May 17 00:41:37.323243 systemd[1]: Starting docker.socket... May 17 00:41:37.326253 systemd[1]: Listening on sshd.socket. 
May 17 00:41:37.327116 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.328063 systemd[1]: Listening on docker.socket. May 17 00:41:37.328730 systemd[1]: Reached target sockets.target. May 17 00:41:37.329351 systemd[1]: Reached target basic.target. May 17 00:41:37.330108 systemd[1]: System is tainted: cgroupsv1 May 17 00:41:37.330357 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:37.330569 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:37.332692 systemd[1]: Starting containerd.service... May 17 00:41:37.335833 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:41:37.341851 systemd[1]: Starting dbus.service... May 17 00:41:37.348681 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:41:37.352962 systemd[1]: Starting extend-filesystems.service... May 17 00:41:37.354126 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:41:37.357819 systemd[1]: Starting kubelet.service... May 17 00:41:37.360971 systemd[1]: Starting motdgen.service... May 17 00:41:37.368673 systemd[1]: Starting prepare-helm.service... May 17 00:41:37.375420 jq[1274]: false May 17 00:41:37.373243 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:41:37.387830 systemd[1]: Starting sshd-keygen.service... May 17 00:41:37.391553 systemd[1]: Starting systemd-logind.service... May 17 00:41:37.394489 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:41:37.394601 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:41:37.396784 systemd[1]: Starting update-engine.service... May 17 00:41:37.401344 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:41:37.407648 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:41:37.466546 jq[1291]: true May 17 00:41:37.408125 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:41:37.446482 systemd-timesyncd[1220]: Contacted time server 216.177.181.129:123 (0.flatcar.pool.ntp.org). May 17 00:41:37.467778 tar[1294]: linux-amd64/helm May 17 00:41:37.447907 systemd-timesyncd[1220]: Initial clock synchronization to Sat 2025-05-17 00:41:37.645057 UTC. May 17 00:41:37.450831 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:41:37.451171 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:41:37.477805 jq[1299]: true May 17 00:41:37.495051 extend-filesystems[1275]: Found loop1 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda May 17 00:41:37.495051 extend-filesystems[1275]: Found vda1 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda2 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda3 May 17 00:41:37.495051 extend-filesystems[1275]: Found usr May 17 00:41:37.495051 extend-filesystems[1275]: Found vda4 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda6 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda7 May 17 00:41:37.495051 extend-filesystems[1275]: Found vda9 May 17 00:41:37.495051 extend-filesystems[1275]: Checking size of /dev/vda9 May 17 00:41:37.537228 dbus-daemon[1272]: [system] SELinux support is enabled May 17 00:41:37.537559 systemd[1]: Started dbus.service. May 17 00:41:37.541414 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:41:37.541838 systemd[1]: Finished motdgen.service. 
May 17 00:41:37.542772 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:41:37.542833 systemd[1]: Reached target system-config.target. May 17 00:41:37.543718 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:41:37.543757 systemd[1]: Reached target user-config.target. May 17 00:41:37.579610 extend-filesystems[1275]: Resized partition /dev/vda9 May 17 00:41:37.589967 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:41:37.598779 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:41:37.633032 update_engine[1288]: I0517 00:41:37.632475 1288 main.cc:92] Flatcar Update Engine starting May 17 00:41:37.657555 update_engine[1288]: I0517 00:41:37.638260 1288 update_check_scheduler.cc:74] Next update check in 2m39s May 17 00:41:37.635568 systemd-networkd[1062]: eth1: Gained IPv6LL May 17 00:41:37.638172 systemd[1]: Started update-engine.service. May 17 00:41:37.641170 systemd[1]: Started locksmithd.service. May 17 00:41:37.667882 bash[1333]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:37.669355 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:41:37.709388 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:41:37.726642 extend-filesystems[1330]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:41:37.726642 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:41:37.726642 extend-filesystems[1330]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 17 00:41:37.725659 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 17 00:41:37.733847 extend-filesystems[1275]: Resized filesystem in /dev/vda9 May 17 00:41:37.733847 extend-filesystems[1275]: Found vdb May 17 00:41:37.725998 systemd[1]: Finished extend-filesystems.service. May 17 00:41:37.759810 systemd-logind[1287]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:41:37.760362 systemd-logind[1287]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:41:37.762940 systemd-logind[1287]: New seat seat0. May 17 00:41:37.766892 systemd[1]: Started systemd-logind.service. May 17 00:41:37.779638 env[1298]: time="2025-05-17T00:41:37.778621120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:41:37.797334 coreos-metadata[1269]: May 17 00:41:37.796 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:41:37.814078 coreos-metadata[1269]: May 17 00:41:37.813 INFO Fetch successful May 17 00:41:37.826637 unknown[1269]: wrote ssh authorized keys file for user: core May 17 00:41:37.843850 update-ssh-keys[1341]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:37.844970 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:41:37.870544 env[1298]: time="2025-05-17T00:41:37.870453614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:41:37.870913 env[1298]: time="2025-05-17T00:41:37.870887927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.876435 env[1298]: time="2025-05-17T00:41:37.876371645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:37.876656 env[1298]: time="2025-05-17T00:41:37.876636483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.877088 env[1298]: time="2025-05-17T00:41:37.877062255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:37.877220 env[1298]: time="2025-05-17T00:41:37.877200066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.877332 env[1298]: time="2025-05-17T00:41:37.877313666Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:41:37.877405 env[1298]: time="2025-05-17T00:41:37.877390229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.877593 env[1298]: time="2025-05-17T00:41:37.877552688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.878062 env[1298]: time="2025-05-17T00:41:37.878030786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:37.878604 env[1298]: time="2025-05-17T00:41:37.878566057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:37.878730 env[1298]: time="2025-05-17T00:41:37.878706158Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:41:37.878916 env[1298]: time="2025-05-17T00:41:37.878890780Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:41:37.879026 env[1298]: time="2025-05-17T00:41:37.879005500Z" level=info msg="metadata content store policy set" policy=shared May 17 00:41:37.881327 env[1298]: time="2025-05-17T00:41:37.881064486Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:41:37.881327 env[1298]: time="2025-05-17T00:41:37.881121213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:41:37.881327 env[1298]: time="2025-05-17T00:41:37.881145719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:41:37.881327 env[1298]: time="2025-05-17T00:41:37.881211476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:41:37.881327 env[1298]: time="2025-05-17T00:41:37.881236588Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881778978Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881813716Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881839224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881861339Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881882518Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881901004Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.881926005Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882102153Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882223848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882795306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882846832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882871365Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882941457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 17 00:41:37.885358 env[1298]: time="2025-05-17T00:41:37.882965406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.882991006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883010191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883032626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883052317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883072746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883092043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883114038Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883360006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883390478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883411729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883430795Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883466885Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883485823Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:41:37.886089 env[1298]: time="2025-05-17T00:41:37.883516016Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:41:37.886705 env[1298]: time="2025-05-17T00:41:37.883567912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:41:37.886754 env[1298]: time="2025-05-17T00:41:37.883879534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:41:37.886754 env[1298]: time="2025-05-17T00:41:37.883970346Z" level=info msg="Connect containerd service" May 17 00:41:37.886754 env[1298]: time="2025-05-17T00:41:37.884035454Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:41:37.894868 env[1298]: time="2025-05-17T00:41:37.894715709Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:41:37.895315 env[1298]: time="2025-05-17T00:41:37.895207194Z" level=info msg="Start subscribing containerd event" May 17 00:41:37.895493 env[1298]: time="2025-05-17T00:41:37.895461930Z" level=info msg="Start recovering state" May 17 00:41:37.895840 env[1298]: 
time="2025-05-17T00:41:37.895816734Z" level=info msg="Start event monitor" May 17 00:41:37.896011 env[1298]: time="2025-05-17T00:41:37.895987529Z" level=info msg="Start snapshots syncer" May 17 00:41:37.896131 env[1298]: time="2025-05-17T00:41:37.896111453Z" level=info msg="Start cni network conf syncer for default" May 17 00:41:37.896235 env[1298]: time="2025-05-17T00:41:37.896217193Z" level=info msg="Start streaming server" May 17 00:41:37.897168 env[1298]: time="2025-05-17T00:41:37.897137279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:41:37.897459 env[1298]: time="2025-05-17T00:41:37.897424845Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:41:37.906943 systemd[1]: Started containerd.service. May 17 00:41:37.908385 env[1298]: time="2025-05-17T00:41:37.908331741Z" level=info msg="containerd successfully booted in 0.139939s" May 17 00:41:38.518794 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:41:38.707604 tar[1294]: linux-amd64/LICENSE May 17 00:41:38.708066 tar[1294]: linux-amd64/README.md May 17 00:41:38.717000 systemd[1]: Finished prepare-helm.service. May 17 00:41:38.964591 sshd_keygen[1314]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:41:38.997452 systemd[1]: Finished sshd-keygen.service. May 17 00:41:39.000218 systemd[1]: Starting issuegen.service... May 17 00:41:39.010264 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:41:39.010566 systemd[1]: Finished issuegen.service. May 17 00:41:39.013130 systemd[1]: Starting systemd-user-sessions.service... May 17 00:41:39.028830 systemd[1]: Finished systemd-user-sessions.service. May 17 00:41:39.031638 systemd[1]: Started getty@tty1.service. May 17 00:41:39.036120 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:41:39.039752 systemd[1]: Reached target getty.target. May 17 00:41:39.308536 systemd[1]: Started kubelet.service. 
May 17 00:41:39.309669 systemd[1]: Reached target multi-user.target. May 17 00:41:39.312581 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:41:39.324876 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:41:39.325166 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:41:39.336770 systemd[1]: Startup finished in 6.345s (kernel) + 8.164s (userspace) = 14.509s. May 17 00:41:39.587784 systemd[1]: Created slice system-sshd.slice. May 17 00:41:39.590426 systemd[1]: Started sshd@0-137.184.190.96:22-147.75.109.163:55810.service. May 17 00:41:39.663268 sshd[1386]: Accepted publickey for core from 147.75.109.163 port 55810 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:39.666462 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.679740 systemd[1]: Created slice user-500.slice. May 17 00:41:39.681808 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:41:39.693549 systemd-logind[1287]: New session 1 of user core. May 17 00:41:39.700983 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:41:39.703113 systemd[1]: Starting user@500.service... May 17 00:41:39.707992 (systemd)[1391]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.806376 systemd[1391]: Queued start job for default target default.target. May 17 00:41:39.806727 systemd[1391]: Reached target paths.target. May 17 00:41:39.806747 systemd[1391]: Reached target sockets.target. May 17 00:41:39.806761 systemd[1391]: Reached target timers.target. May 17 00:41:39.806774 systemd[1391]: Reached target basic.target. May 17 00:41:39.806829 systemd[1391]: Reached target default.target. May 17 00:41:39.806862 systemd[1391]: Startup finished in 89ms. May 17 00:41:39.806983 systemd[1]: Started user@500.service. May 17 00:41:39.808876 systemd[1]: Started session-1.scope. 
May 17 00:41:39.880276 systemd[1]: Started sshd@1-137.184.190.96:22-147.75.109.163:55816.service. May 17 00:41:39.950396 sshd[1400]: Accepted publickey for core from 147.75.109.163 port 55816 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:39.951726 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.960171 systemd-logind[1287]: New session 2 of user core. May 17 00:41:39.960860 systemd[1]: Started session-2.scope. May 17 00:41:40.033574 sshd[1400]: pam_unix(sshd:session): session closed for user core May 17 00:41:40.036813 systemd[1]: sshd@1-137.184.190.96:22-147.75.109.163:55816.service: Deactivated successfully. May 17 00:41:40.038121 systemd-logind[1287]: Session 2 logged out. Waiting for processes to exit. May 17 00:41:40.040300 systemd[1]: Started sshd@2-137.184.190.96:22-147.75.109.163:55826.service. May 17 00:41:40.040970 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:41:40.041990 systemd-logind[1287]: Removed session 2. May 17 00:41:40.096124 sshd[1407]: Accepted publickey for core from 147.75.109.163 port 55826 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:40.098413 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:40.104390 systemd-logind[1287]: New session 3 of user core. May 17 00:41:40.105035 systemd[1]: Started session-3.scope. 
May 17 00:41:40.132917 kubelet[1377]: E0517 00:41:40.132729 1377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:41:40.136222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:41:40.136506 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:41:40.167537 sshd[1407]: pam_unix(sshd:session): session closed for user core May 17 00:41:40.174171 systemd[1]: Started sshd@3-137.184.190.96:22-147.75.109.163:55832.service. May 17 00:41:40.175388 systemd[1]: sshd@2-137.184.190.96:22-147.75.109.163:55826.service: Deactivated successfully. May 17 00:41:40.177717 systemd-logind[1287]: Session 3 logged out. Waiting for processes to exit. May 17 00:41:40.177966 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:41:40.185275 systemd-logind[1287]: Removed session 3. May 17 00:41:40.233742 sshd[1413]: Accepted publickey for core from 147.75.109.163 port 55832 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:41:40.236168 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:40.242728 systemd-logind[1287]: New session 4 of user core. May 17 00:41:40.243040 systemd[1]: Started session-4.scope. May 17 00:41:40.313564 sshd[1413]: pam_unix(sshd:session): session closed for user core May 17 00:41:40.319236 systemd[1]: Started sshd@4-137.184.190.96:22-147.75.109.163:55840.service. May 17 00:41:40.320436 systemd[1]: sshd@3-137.184.190.96:22-147.75.109.163:55832.service: Deactivated successfully. May 17 00:41:40.322046 systemd-logind[1287]: Session 4 logged out. Waiting for processes to exit. May 17 00:41:40.322440 systemd[1]: session-4.scope: Deactivated successfully. 
May 17 00:41:40.323966 systemd-logind[1287]: Removed session 4.
May 17 00:41:40.377031 sshd[1421]: Accepted publickey for core from 147.75.109.163 port 55840 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.379557 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.388148 systemd-logind[1287]: New session 5 of user core.
May 17 00:41:40.389975 systemd[1]: Started session-5.scope.
May 17 00:41:40.465919 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 00:41:40.466286 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:41:40.475620 dbus-daemon[1272]: \xd0\u001d\u0007\x99\xb2U: received setenforce notice (enforcing=1731939280)
May 17 00:41:40.478305 sudo[1426]: pam_unix(sudo:session): session closed for user root
May 17 00:41:40.483588 sshd[1421]: pam_unix(sshd:session): session closed for user core
May 17 00:41:40.487634 systemd[1]: Started sshd@5-137.184.190.96:22-147.75.109.163:55844.service.
May 17 00:41:40.493636 systemd[1]: sshd@4-137.184.190.96:22-147.75.109.163:55840.service: Deactivated successfully.
May 17 00:41:40.496042 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:41:40.497259 systemd-logind[1287]: Session 5 logged out. Waiting for processes to exit.
May 17 00:41:40.499416 systemd-logind[1287]: Removed session 5.
May 17 00:41:40.538537 sshd[1428]: Accepted publickey for core from 147.75.109.163 port 55844 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.541228 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.547577 systemd-logind[1287]: New session 6 of user core.
May 17 00:41:40.548100 systemd[1]: Started session-6.scope.
May 17 00:41:40.613408 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 00:41:40.614372 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:41:40.618606 sudo[1435]: pam_unix(sudo:session): session closed for user root
May 17 00:41:40.626053 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 00:41:40.626497 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:41:40.639613 systemd[1]: Stopping audit-rules.service...
May 17 00:41:40.641987 auditctl[1438]: No rules
May 17 00:41:40.641000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 17 00:41:40.641000 audit[1438]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd6e5f640 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:40.641000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 17 00:41:40.642389 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 00:41:40.642728 systemd[1]: Stopped audit-rules.service.
May 17 00:41:40.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.646066 systemd[1]: Starting audit-rules.service...
May 17 00:41:40.676087 augenrules[1456]: No rules
May 17 00:41:40.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.679000 audit[1434]: USER_END pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.679000 audit[1434]: CRED_DISP pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.679473 sudo[1434]: pam_unix(sudo:session): session closed for user root
May 17 00:41:40.677894 systemd[1]: Finished audit-rules.service.
May 17 00:41:40.682873 sshd[1428]: pam_unix(sshd:session): session closed for user core
May 17 00:41:40.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.190.96:22-147.75.109.163:55848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.688740 systemd[1]: Started sshd@6-137.184.190.96:22-147.75.109.163:55848.service.
May 17 00:41:40.689000 audit[1428]: USER_END pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.689000 audit[1428]: CRED_DISP pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.692913 systemd[1]: sshd@5-137.184.190.96:22-147.75.109.163:55844.service: Deactivated successfully.
May 17 00:41:40.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-137.184.190.96:22-147.75.109.163:55844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.694641 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:41:40.695268 systemd-logind[1287]: Session 6 logged out. Waiting for processes to exit.
May 17 00:41:40.703428 systemd-logind[1287]: Removed session 6.
May 17 00:41:40.742000 audit[1461]: USER_ACCT pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.744088 sshd[1461]: Accepted publickey for core from 147.75.109.163 port 55848 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.744000 audit[1461]: CRED_ACQ pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.744000 audit[1461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe84127880 a2=3 a3=0 items=0 ppid=1 pid=1461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:40.744000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:41:40.746432 sshd[1461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.753262 systemd-logind[1287]: New session 7 of user core.
May 17 00:41:40.754378 systemd[1]: Started session-7.scope.
May 17 00:41:40.763554 kernel: kauditd_printk_skb: 165 callbacks suppressed
May 17 00:41:40.763685 kernel: audit: type=1105 audit(1747442500.761:179): pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.761000 audit[1461]: USER_START pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.767607 kernel: audit: type=1103 audit(1747442500.765:180): pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.765000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:41:40.823000 audit[1467]: USER_ACCT pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.825601 sudo[1467]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:41:40.825963 sudo[1467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:41:40.823000 audit[1467]: CRED_REFR pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.829479 kernel: audit: type=1101 audit(1747442500.823:181): pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.829606 kernel: audit: type=1110 audit(1747442500.823:182): pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.830000 audit[1467]: USER_START pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.836353 kernel: audit: type=1105 audit(1747442500.830:183): pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:41:40.871580 systemd[1]: Starting docker.service...
May 17 00:41:40.935953 env[1477]: time="2025-05-17T00:41:40.935790404Z" level=info msg="Starting up"
May 17 00:41:40.939896 env[1477]: time="2025-05-17T00:41:40.939839692Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:41:40.940082 env[1477]: time="2025-05-17T00:41:40.940063512Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:41:40.940163 env[1477]: time="2025-05-17T00:41:40.940144727Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:41:40.940236 env[1477]: time="2025-05-17T00:41:40.940218956Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:41:40.943551 env[1477]: time="2025-05-17T00:41:40.943507155Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:41:40.943551 env[1477]: time="2025-05-17T00:41:40.943535164Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:41:40.943829 env[1477]: time="2025-05-17T00:41:40.943553920Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:41:40.943829 env[1477]: time="2025-05-17T00:41:40.943579322Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:41:41.026179 env[1477]: time="2025-05-17T00:41:41.026126616Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 17 00:41:41.026505 env[1477]: time="2025-05-17T00:41:41.026477670Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 17 00:41:41.026909 env[1477]: time="2025-05-17T00:41:41.026878544Z" level=info msg="Loading containers: start."
May 17 00:41:41.121000 audit[1508]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.121000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdbab39a80 a2=0 a3=7ffdbab39a6c items=0 ppid=1477 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.128585 kernel: audit: type=1325 audit(1747442501.121:184): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.128749 kernel: audit: type=1300 audit(1747442501.121:184): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdbab39a80 a2=0 a3=7ffdbab39a6c items=0 ppid=1477 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.121000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 17 00:41:41.130889 kernel: audit: type=1327 audit(1747442501.121:184): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 17 00:41:41.132000 audit[1510]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.132000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd72c1ea70 a2=0 a3=7ffd72c1ea5c items=0 ppid=1477 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.140382 kernel: audit: type=1325 audit(1747442501.132:185): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.140491 kernel: audit: type=1300 audit(1747442501.132:185): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd72c1ea70 a2=0 a3=7ffd72c1ea5c items=0 ppid=1477 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.132000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
May 17 00:41:41.135000 audit[1512]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.135000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea680a620 a2=0 a3=7ffea680a60c items=0 ppid=1477 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.135000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 17 00:41:41.142000 audit[1514]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.142000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff6830c450 a2=0 a3=7fff6830c43c items=0 ppid=1477 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 17 00:41:41.144000 audit[1516]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.144000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf0ef1dc0 a2=0 a3=7ffdf0ef1dac items=0 ppid=1477 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.144000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 17 00:41:41.166000 audit[1521]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.166000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed6881330 a2=0 a3=7ffed688131c items=0 ppid=1477 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.166000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 17 00:41:41.176000 audit[1523]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.176000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff394277c0 a2=0 a3=7fff394277ac items=0 ppid=1477 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.176000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 17 00:41:41.179000 audit[1525]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.179000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcd30f7e00 a2=0 a3=7ffcd30f7dec items=0 ppid=1477 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 17 00:41:41.183000 audit[1527]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.183000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcb2e41cc0 a2=0 a3=7ffcb2e41cac items=0 ppid=1477 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.183000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:41:41.193000 audit[1531]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.193000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffafd8a190 a2=0 a3=7fffafd8a17c items=0 ppid=1477 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.193000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 17 00:41:41.200000 audit[1532]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.200000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe152ee380 a2=0 a3=7ffe152ee36c items=0 ppid=1477 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.200000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:41:41.214522 kernel: Initializing XFRM netlink socket
May 17 00:41:41.255420 env[1477]: time="2025-05-17T00:41:41.255365483Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:41:41.282000 audit[1541]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.282000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe0fac7df0 a2=0 a3=7ffe0fac7ddc items=0 ppid=1477 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 17 00:41:41.297000 audit[1544]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.297000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcd02eab70 a2=0 a3=7ffcd02eab5c items=0 ppid=1477 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.297000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 17 00:41:41.303000 audit[1547]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.303000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd37a2c860 a2=0 a3=7ffd37a2c84c items=0 ppid=1477 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.303000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
May 17 00:41:41.307000 audit[1549]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.307000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffd732c150 a2=0 a3=7fffd732c13c items=0 ppid=1477 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.307000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
May 17 00:41:41.312000 audit[1551]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.312000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd047d3670 a2=0 a3=7ffd047d365c items=0 ppid=1477 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
May 17 00:41:41.316000 audit[1553]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.316000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffccc6840b0 a2=0 a3=7ffccc68409c items=0 ppid=1477 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
May 17 00:41:41.319000 audit[1555]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.319000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc465690a0 a2=0 a3=7ffc4656908c items=0 ppid=1477 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.319000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
May 17 00:41:41.332000 audit[1558]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.332000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff77501a20 a2=0 a3=7fff77501a0c items=0 ppid=1477 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
May 17 00:41:41.335000 audit[1560]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.335000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd0b9eb3c0 a2=0 a3=7ffd0b9eb3ac items=0 ppid=1477 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.335000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 17 00:41:41.339000 audit[1562]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.339000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff886f4f70 a2=0 a3=7fff886f4f5c items=0 ppid=1477 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.339000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 17 00:41:41.342000 audit[1564]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.342000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd527f210 a2=0 a3=7ffdd527f1fc items=0 ppid=1477 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
May 17 00:41:41.344086 systemd-networkd[1062]: docker0: Link UP
May 17 00:41:41.356000 audit[1568]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.356000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4733c450 a2=0 a3=7ffc4733c43c items=0 ppid=1477 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.356000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 17 00:41:41.363000 audit[1569]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:41:41.363000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffd9e08900 a2=0 a3=7fffd9e088ec items=0 ppid=1477 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:41.363000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:41:41.364592 env[1477]: time="2025-05-17T00:41:41.364546079Z" level=info msg="Loading containers: done."
May 17 00:41:41.383198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2448062452-merged.mount: Deactivated successfully.
May 17 00:41:41.388912 env[1477]: time="2025-05-17T00:41:41.388817685Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:41:41.389265 env[1477]: time="2025-05-17T00:41:41.389224081Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:41:41.389480 env[1477]: time="2025-05-17T00:41:41.389448058Z" level=info msg="Daemon has completed initialization"
May 17 00:41:41.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:41.409532 systemd[1]: Started docker.service.
May 17 00:41:41.415863 env[1477]: time="2025-05-17T00:41:41.415791852Z" level=info msg="API listen on /run/docker.sock"
May 17 00:41:41.442549 systemd[1]: Starting coreos-metadata.service...
May 17 00:41:41.494766 coreos-metadata[1594]: May 17 00:41:41.492 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:41.506808 coreos-metadata[1594]: May 17 00:41:41.506 INFO Fetch successful
May 17 00:41:41.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:41.525075 systemd[1]: Finished coreos-metadata.service.
May 17 00:41:42.526503 env[1298]: time="2025-05-17T00:41:42.526434119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 17 00:41:43.088899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436784036.mount: Deactivated successfully.
May 17 00:41:44.825136 env[1298]: time="2025-05-17T00:41:44.825052684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:44.826686 env[1298]: time="2025-05-17T00:41:44.826634738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:44.831071 env[1298]: time="2025-05-17T00:41:44.829478187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:44.834432 env[1298]: time="2025-05-17T00:41:44.832958090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:44.834845 env[1298]: time="2025-05-17T00:41:44.834380180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:41:44.835861 env[1298]: time="2025-05-17T00:41:44.835811666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:41:47.071222 env[1298]: time="2025-05-17T00:41:47.071135051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:47.074237 env[1298]: time="2025-05-17T00:41:47.074180824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:41:47.076596 env[1298]: time="2025-05-17T00:41:47.076547825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:47.077861 env[1298]: time="2025-05-17T00:41:47.077800893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:41:47.082185 env[1298]: time="2025-05-17T00:41:47.082127652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:41:47.083061 env[1298]: time="2025-05-17T00:41:47.083015739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.545092 env[1298]: time="2025-05-17T00:41:48.545028828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.546749 env[1298]: time="2025-05-17T00:41:48.546706804Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.549452 env[1298]: time="2025-05-17T00:41:48.549400596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.552683 env[1298]: time="2025-05-17T00:41:48.552636150Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:48.553728 env[1298]: time="2025-05-17T00:41:48.553666803Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:41:48.554483 env[1298]: time="2025-05-17T00:41:48.554440703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:41:49.687044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479486633.mount: Deactivated successfully. May 17 00:41:50.387399 kernel: kauditd_printk_skb: 69 callbacks suppressed May 17 00:41:50.387574 kernel: audit: type=1130 audit(1747442510.383:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.384939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:41:50.385182 systemd[1]: Stopped kubelet.service. May 17 00:41:50.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.388293 systemd[1]: Starting kubelet.service... May 17 00:41:50.390452 kernel: audit: type=1131 audit(1747442510.383:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.497602 env[1298]: time="2025-05-17T00:41:50.497520150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.498946 env[1298]: time="2025-05-17T00:41:50.498886355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.500092 env[1298]: time="2025-05-17T00:41:50.500043047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.503599 env[1298]: time="2025-05-17T00:41:50.503545138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:50.504387 env[1298]: time="2025-05-17T00:41:50.504334484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:41:50.505146 env[1298]: time="2025-05-17T00:41:50.505096931Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:41:50.599975 systemd[1]: Started kubelet.service. May 17 00:41:50.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.603316 kernel: audit: type=1130 audit(1747442510.599:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.701761 kubelet[1624]: E0517 00:41:50.701630 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:41:50.705967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:41:50.706329 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:41:50.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:41:50.712587 kernel: audit: type=1131 audit(1747442510.706:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:41:51.019934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873684655.mount: Deactivated successfully. 
May 17 00:41:52.116432 env[1298]: time="2025-05-17T00:41:52.116369491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.119139 env[1298]: time="2025-05-17T00:41:52.119086838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.122544 env[1298]: time="2025-05-17T00:41:52.122493174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.131641 env[1298]: time="2025-05-17T00:41:52.131585807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.132091 env[1298]: time="2025-05-17T00:41:52.132054276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:41:52.133484 env[1298]: time="2025-05-17T00:41:52.133438798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:41:52.562322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647779100.mount: Deactivated successfully. 
May 17 00:41:52.567519 env[1298]: time="2025-05-17T00:41:52.567460984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.568909 env[1298]: time="2025-05-17T00:41:52.568869967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.570674 env[1298]: time="2025-05-17T00:41:52.570636036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.572516 env[1298]: time="2025-05-17T00:41:52.572476972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:52.573245 env[1298]: time="2025-05-17T00:41:52.573184369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:41:52.575039 env[1298]: time="2025-05-17T00:41:52.574897051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:41:53.084927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152231404.mount: Deactivated successfully. 
May 17 00:41:55.894894 env[1298]: time="2025-05-17T00:41:55.894814789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:55.897482 env[1298]: time="2025-05-17T00:41:55.897428193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:55.900401 env[1298]: time="2025-05-17T00:41:55.900355849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:55.902674 env[1298]: time="2025-05-17T00:41:55.902623492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:55.903645 env[1298]: time="2025-05-17T00:41:55.903602450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:41:59.045243 systemd[1]: Stopped kubelet.service. May 17 00:41:59.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.050395 kernel: audit: type=1130 audit(1747442519.045:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.048039 systemd[1]: Starting kubelet.service... 
May 17 00:41:59.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.056323 kernel: audit: type=1131 audit(1747442519.045:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.090572 systemd[1]: Reloading. May 17 00:41:59.238223 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2025-05-17T00:41:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:59.238845 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2025-05-17T00:41:59Z" level=info msg="torcx already run" May 17 00:41:59.352183 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:59.352213 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:59.381396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:59.488426 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:41:59.488537 systemd[1]: kubelet.service: Failed with result 'signal'. 
May 17 00:41:59.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:41:59.488966 systemd[1]: Stopped kubelet.service. May 17 00:41:59.492426 kernel: audit: type=1130 audit(1747442519.487:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:41:59.492379 systemd[1]: Starting kubelet.service... May 17 00:41:59.639543 systemd[1]: Started kubelet.service. May 17 00:41:59.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.644374 kernel: audit: type=1130 audit(1747442519.638:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.710227 kubelet[1741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:41:59.710760 kubelet[1741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:41:59.710855 kubelet[1741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:41:59.711336 kubelet[1741]: I0517 00:41:59.711253 1741 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:00.179400 kubelet[1741]: I0517 00:42:00.179343 1741 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:42:00.179660 kubelet[1741]: I0517 00:42:00.179636 1741 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:00.180221 kubelet[1741]: I0517 00:42:00.180185 1741 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:42:00.223031 kubelet[1741]: E0517 00:42:00.222969 1741 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.190.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:00.223530 kubelet[1741]: I0517 00:42:00.223488 1741 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:00.239542 kubelet[1741]: E0517 00:42:00.239469 1741 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:00.239542 kubelet[1741]: I0517 00:42:00.239531 1741 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:00.245056 kubelet[1741]: I0517 00:42:00.245005 1741 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:42:00.246610 kubelet[1741]: I0517 00:42:00.246541 1741 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:42:00.246857 kubelet[1741]: I0517 00:42:00.246788 1741 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:00.247213 kubelet[1741]: I0517 00:42:00.246861 1741 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-9c3fefbd06","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:42:00.247391 kubelet[1741]: I0517 00:42:00.247232 1741 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:42:00.247391 kubelet[1741]: I0517 00:42:00.247249 1741 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:42:00.247487 kubelet[1741]: I0517 00:42:00.247468 1741 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:00.259011 kubelet[1741]: W0517 00:42:00.258925 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.190.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-9c3fefbd06&limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:00.259600 kubelet[1741]: I0517 00:42:00.259460 1741 kubelet.go:408] "Attempting to sync node with API server" May 17 00:42:00.259729 kubelet[1741]: I0517 00:42:00.259716 1741 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:00.259874 kubelet[1741]: I0517 00:42:00.259857 1741 kubelet.go:314] "Adding apiserver pod source" May 17 00:42:00.260140 kubelet[1741]: I0517 00:42:00.260115 1741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:00.260520 kubelet[1741]: E0517 00:42:00.260485 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.190.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-9c3fefbd06&limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:00.263391 kubelet[1741]: W0517 00:42:00.263318 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.190.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
137.184.190.96:6443: connect: connection refused May 17 00:42:00.263649 kubelet[1741]: E0517 00:42:00.263616 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.190.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:00.263872 kubelet[1741]: I0517 00:42:00.263853 1741 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:00.264709 kubelet[1741]: I0517 00:42:00.264681 1741 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:42:00.264925 kubelet[1741]: W0517 00:42:00.264909 1741 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:42:00.271702 kubelet[1741]: I0517 00:42:00.271655 1741 server.go:1274] "Started kubelet" May 17 00:42:00.281480 kubelet[1741]: I0517 00:42:00.281394 1741 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:00.282689 kubelet[1741]: I0517 00:42:00.282499 1741 server.go:449] "Adding debug handlers to kubelet server" May 17 00:42:00.298599 kernel: audit: type=1400 audit(1747442520.287:218): avc: denied { mac_admin } for pid=1741 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:00.298786 kernel: audit: type=1401 audit(1747442520.287:218): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:00.298836 kernel: audit: type=1300 audit(1747442520.287:218): arch=c000003e syscall=188 success=no exit=-22 a0=c000b3e240 a1=c0008e3398 a2=c000b3e210 a3=25 items=0 ppid=1 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.287000 audit[1741]: AVC avc: denied { mac_admin } for pid=1741 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:00.287000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:00.287000 audit[1741]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b3e240 a1=c0008e3398 a2=c000b3e210 a3=25 items=0 ppid=1 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.299212 kubelet[1741]: I0517 00:42:00.288612 1741 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:42:00.299212 kubelet[1741]: I0517 00:42:00.288673 1741 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:42:00.299212 kubelet[1741]: I0517 00:42:00.288743 1741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:00.299212 kubelet[1741]: I0517 00:42:00.296230 1741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:42:00.299212 kubelet[1741]: I0517 00:42:00.298841 1741 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:42:00.299212 kubelet[1741]: E0517 00:42:00.299208 1741 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-3510.3.7-n-9c3fefbd06\" not found" May 17 00:42:00.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:00.304557 kernel: audit: type=1327 audit(1747442520.287:218): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:00.304648 kubelet[1741]: I0517 00:42:00.304116 1741 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:42:00.304648 kubelet[1741]: I0517 00:42:00.304209 1741 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:00.287000 audit[1741]: AVC avc: denied { mac_admin } for pid=1741 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:00.308458 kernel: audit: type=1400 audit(1747442520.287:219): avc: denied { mac_admin } for pid=1741 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:00.309143 kubelet[1741]: I0517 00:42:00.309086 1741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:42:00.310326 kubelet[1741]: I0517 00:42:00.310288 1741 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:00.310861 kubelet[1741]: E0517 00:42:00.310829 1741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://137.184.190.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-9c3fefbd06?timeout=10s\": dial tcp 137.184.190.96:6443: connect: connection refused" interval="200ms" May 17 00:42:00.287000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:00.313957 kernel: audit: type=1401 audit(1747442520.287:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:00.287000 audit[1741]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a10b60 a1=c0008e33b0 a2=c000b3e2d0 a3=25 items=0 ppid=1 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:00.291000 audit[1752]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.291000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdbc1cdec0 a2=0 a3=7ffdbc1cdeac items=0 ppid=1741 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:42:00.293000 audit[1753]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1753 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.293000 audit[1753]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd61706d0 a2=0 a3=7fffd61706bc items=0 ppid=1741 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:42:00.300000 audit[1755]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.300000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd40402d10 a2=0 a3=7ffd40402cfc items=0 ppid=1741 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:00.304000 audit[1757]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.304000 audit[1757]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff86342ed0 a2=0 a3=7fff86342ebc items=0 ppid=1741 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:00.316402 kubelet[1741]: W0517 00:42:00.316349 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://137.184.190.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:00.317375 kubelet[1741]: E0517 00:42:00.316693 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.190.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:00.317604 kubelet[1741]: I0517 00:42:00.317578 1741 factory.go:221] Registration of the systemd container factory successfully May 17 00:42:00.317756 kubelet[1741]: E0517 00:42:00.314672 1741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.190.96:6443/api/v1/namespaces/default/events\": dial tcp 137.184.190.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-9c3fefbd06.184029b6be33b69a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-9c3fefbd06,UID:ci-3510.3.7-n-9c3fefbd06,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-9c3fefbd06,},FirstTimestamp:2025-05-17 00:42:00.271599258 +0000 UTC m=+0.613039297,LastTimestamp:2025-05-17 00:42:00.271599258 +0000 UTC m=+0.613039297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-9c3fefbd06,}" May 17 00:42:00.317996 kubelet[1741]: I0517 00:42:00.317734 1741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:00.323535 kubelet[1741]: E0517 00:42:00.323493 1741 kubelet.go:1478] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:00.325903 kubelet[1741]: I0517 00:42:00.325877 1741 factory.go:221] Registration of the containerd container factory successfully May 17 00:42:00.336000 audit[1763]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.336000 audit[1763]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff6220c380 a2=0 a3=7fff6220c36c items=0 ppid=1741 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:42:00.340567 kubelet[1741]: I0517 00:42:00.340508 1741 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 00:42:00.341000 audit[1765]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:00.341000 audit[1765]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcd51a9f60 a2=0 a3=7ffcd51a9f4c items=0 ppid=1741 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:42:00.344000 audit[1766]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.344000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdc1ccf90 a2=0 a3=7ffcdc1ccf7c items=0 ppid=1741 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.345956 kubelet[1741]: I0517 00:42:00.345845 1741 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:42:00.345956 kubelet[1741]: I0517 00:42:00.345889 1741 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:42:00.345956 kubelet[1741]: I0517 00:42:00.345914 1741 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:42:00.346109 kubelet[1741]: E0517 00:42:00.345996 1741 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:00.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:42:00.346000 audit[1767]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:00.346000 audit[1767]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb248dfc0 a2=0 a3=7ffcb248dfac items=0 ppid=1741 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:42:00.349000 audit[1768]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:00.349000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff785d2720 a2=0 a3=7fff785d270c items=0 ppid=1741 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.349000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:42:00.352816 kubelet[1741]: W0517 00:42:00.352752 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.190.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:00.352934 kubelet[1741]: E0517 00:42:00.352841 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.190.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:00.351000 audit[1769]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.351000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc958d730 a2=0 a3=7fffc958d71c items=0 ppid=1741 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:42:00.353000 audit[1770]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:00.353000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd51665410 a2=0 a3=7ffd516653fc items=0 ppid=1741 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:42:00.354000 audit[1771]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:00.354000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcfa6e040 a2=0 a3=7ffdcfa6e02c items=0 ppid=1741 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:42:00.367923 kubelet[1741]: I0517 00:42:00.367872 1741 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:42:00.367923 kubelet[1741]: I0517 00:42:00.367892 1741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:42:00.367923 kubelet[1741]: I0517 00:42:00.367917 1741 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:00.369925 kubelet[1741]: I0517 00:42:00.369881 1741 policy_none.go:49] "None policy: Start" May 17 00:42:00.370883 kubelet[1741]: I0517 00:42:00.370799 1741 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:42:00.370883 kubelet[1741]: I0517 00:42:00.370853 1741 state_mem.go:35] "Initializing new in-memory state store" May 17 00:42:00.375604 kubelet[1741]: I0517 00:42:00.375529 1741 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:42:00.374000 audit[1741]: AVC avc: denied { mac_admin } for pid=1741 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:42:00.374000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:00.374000 audit[1741]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000632600 a1=c000023a58 a2=c0006325d0 a3=25 items=0 ppid=1 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:00.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:00.376015 kubelet[1741]: I0517 00:42:00.375613 1741 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:42:00.376015 kubelet[1741]: I0517 00:42:00.375779 1741 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:00.376015 kubelet[1741]: I0517 00:42:00.375796 1741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:00.377488 kubelet[1741]: I0517 00:42:00.377452 1741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:42:00.381900 kubelet[1741]: E0517 00:42:00.381864 1741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-9c3fefbd06\" not found" May 17 00:42:00.480768 kubelet[1741]: I0517 00:42:00.477555 1741 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.481682 kubelet[1741]: E0517 00:42:00.481643 1741 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://137.184.190.96:6443/api/v1/nodes\": dial tcp 137.184.190.96:6443: connect: connection refused" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.512658 kubelet[1741]: E0517 00:42:00.512602 1741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-9c3fefbd06?timeout=10s\": dial tcp 137.184.190.96:6443: connect: connection refused" interval="400ms" May 17 00:42:00.605019 kubelet[1741]: I0517 00:42:00.604970 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605268 kubelet[1741]: I0517 00:42:00.605250 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605445 kubelet[1741]: I0517 00:42:00.605423 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605572 kubelet[1741]: I0517 00:42:00.605557 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605680 kubelet[1741]: I0517 00:42:00.605666 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605801 kubelet[1741]: I0517 00:42:00.605785 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.605902 kubelet[1741]: I0517 00:42:00.605889 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0321c52d05c0cf2ece690ef7b168b26-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-9c3fefbd06\" (UID: \"b0321c52d05c0cf2ece690ef7b168b26\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.606568 kubelet[1741]: I0517 00:42:00.605969 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.606714 kubelet[1741]: 
I0517 00:42:00.606698 1741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.683584 kubelet[1741]: I0517 00:42:00.683529 1741 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.684023 kubelet[1741]: E0517 00:42:00.683964 1741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.190.96:6443/api/v1/nodes\": dial tcp 137.184.190.96:6443: connect: connection refused" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:00.752970 kubelet[1741]: E0517 00:42:00.752709 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.754155 env[1298]: time="2025-05-17T00:42:00.753887628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-9c3fefbd06,Uid:18034f2fa02f0ff8ab0cddf95ebf05b1,Namespace:kube-system,Attempt:0,}" May 17 00:42:00.762980 kubelet[1741]: E0517 00:42:00.762907 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.763861 env[1298]: time="2025-05-17T00:42:00.763780351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-9c3fefbd06,Uid:04a9884ddf305183c8750728479b7cae,Namespace:kube-system,Attempt:0,}" May 17 00:42:00.766047 kubelet[1741]: E0517 00:42:00.766017 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:00.767750 env[1298]: time="2025-05-17T00:42:00.767028514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-9c3fefbd06,Uid:b0321c52d05c0cf2ece690ef7b168b26,Namespace:kube-system,Attempt:0,}" May 17 00:42:00.914147 kubelet[1741]: E0517 00:42:00.914079 1741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-9c3fefbd06?timeout=10s\": dial tcp 137.184.190.96:6443: connect: connection refused" interval="800ms" May 17 00:42:01.086131 kubelet[1741]: I0517 00:42:01.085915 1741 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:01.086925 kubelet[1741]: E0517 00:42:01.086876 1741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.190.96:6443/api/v1/nodes\": dial tcp 137.184.190.96:6443: connect: connection refused" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:01.268012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963618672.mount: Deactivated successfully. 
May 17 00:42:01.276951 env[1298]: time="2025-05-17T00:42:01.275617337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.280529 env[1298]: time="2025-05-17T00:42:01.280468389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.283114 env[1298]: time="2025-05-17T00:42:01.283034377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.284593 env[1298]: time="2025-05-17T00:42:01.284521017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.288091 env[1298]: time="2025-05-17T00:42:01.288023881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.290701 env[1298]: time="2025-05-17T00:42:01.290638528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.293149 env[1298]: time="2025-05-17T00:42:01.293086226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.298344 env[1298]: time="2025-05-17T00:42:01.298259955Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.302752 env[1298]: time="2025-05-17T00:42:01.302692094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.306606 env[1298]: time="2025-05-17T00:42:01.306558986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.307870 env[1298]: time="2025-05-17T00:42:01.307813997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.311812 env[1298]: time="2025-05-17T00:42:01.311754015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:01.368432 env[1298]: time="2025-05-17T00:42:01.368195877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:01.368432 env[1298]: time="2025-05-17T00:42:01.368271273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:01.369994 env[1298]: time="2025-05-17T00:42:01.368306242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:01.369994 env[1298]: time="2025-05-17T00:42:01.369416181Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3145b89ac592219658583766eb687fce57408360d7189546559e389b67aa3992 pid=1785 runtime=io.containerd.runc.v2 May 17 00:42:01.375120 env[1298]: time="2025-05-17T00:42:01.374756947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:01.375120 env[1298]: time="2025-05-17T00:42:01.374849013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:01.375120 env[1298]: time="2025-05-17T00:42:01.374865356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:01.375805 env[1298]: time="2025-05-17T00:42:01.375628612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/208cee4541254e72d1e84b394bef6bea1ce0109a02901c97a58aadbd8e872b92 pid=1798 runtime=io.containerd.runc.v2 May 17 00:42:01.376909 env[1298]: time="2025-05-17T00:42:01.376561058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:01.376909 env[1298]: time="2025-05-17T00:42:01.376624905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:01.376909 env[1298]: time="2025-05-17T00:42:01.376641003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:01.377532 env[1298]: time="2025-05-17T00:42:01.377409522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c0ef938590cfeee9fe033da9b87aa06290014cf22106e74d87cebcbe80335ea pid=1799 runtime=io.containerd.runc.v2 May 17 00:42:01.382393 kubelet[1741]: W0517 00:42:01.382205 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.190.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:01.382393 kubelet[1741]: E0517 00:42:01.382320 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.190.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:01.523531 env[1298]: time="2025-05-17T00:42:01.523463956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-9c3fefbd06,Uid:04a9884ddf305183c8750728479b7cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"3145b89ac592219658583766eb687fce57408360d7189546559e389b67aa3992\"" May 17 00:42:01.525919 kubelet[1741]: E0517 00:42:01.525569 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:01.531660 env[1298]: time="2025-05-17T00:42:01.531598924Z" level=info msg="CreateContainer within sandbox \"3145b89ac592219658583766eb687fce57408360d7189546559e389b67aa3992\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:42:01.544573 env[1298]: time="2025-05-17T00:42:01.544500821Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-9c3fefbd06,Uid:18034f2fa02f0ff8ab0cddf95ebf05b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"208cee4541254e72d1e84b394bef6bea1ce0109a02901c97a58aadbd8e872b92\"" May 17 00:42:01.546480 kubelet[1741]: E0517 00:42:01.546409 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:01.550987 env[1298]: time="2025-05-17T00:42:01.550931067Z" level=info msg="CreateContainer within sandbox \"208cee4541254e72d1e84b394bef6bea1ce0109a02901c97a58aadbd8e872b92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:42:01.577128 env[1298]: time="2025-05-17T00:42:01.577036822Z" level=info msg="CreateContainer within sandbox \"3145b89ac592219658583766eb687fce57408360d7189546559e389b67aa3992\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea5916237d69dbcbfe3a20040c64ba03659f29a044b6aacc7e4234e402fdb9db\"" May 17 00:42:01.578702 env[1298]: time="2025-05-17T00:42:01.578566164Z" level=info msg="StartContainer for \"ea5916237d69dbcbfe3a20040c64ba03659f29a044b6aacc7e4234e402fdb9db\"" May 17 00:42:01.585217 env[1298]: time="2025-05-17T00:42:01.585145975Z" level=info msg="CreateContainer within sandbox \"208cee4541254e72d1e84b394bef6bea1ce0109a02901c97a58aadbd8e872b92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"795cfb9196903e1da9b62b69b5a0ae38d455d41ba9daa91dda1a99a7f9470240\"" May 17 00:42:01.586115 env[1298]: time="2025-05-17T00:42:01.586052278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-9c3fefbd06,Uid:b0321c52d05c0cf2ece690ef7b168b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c0ef938590cfeee9fe033da9b87aa06290014cf22106e74d87cebcbe80335ea\"" May 17 00:42:01.587600 env[1298]: 
time="2025-05-17T00:42:01.587540152Z" level=info msg="StartContainer for \"795cfb9196903e1da9b62b69b5a0ae38d455d41ba9daa91dda1a99a7f9470240\"" May 17 00:42:01.589116 kubelet[1741]: E0517 00:42:01.588873 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:01.597339 env[1298]: time="2025-05-17T00:42:01.597256113Z" level=info msg="CreateContainer within sandbox \"1c0ef938590cfeee9fe033da9b87aa06290014cf22106e74d87cebcbe80335ea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:42:01.617238 env[1298]: time="2025-05-17T00:42:01.617136902Z" level=info msg="CreateContainer within sandbox \"1c0ef938590cfeee9fe033da9b87aa06290014cf22106e74d87cebcbe80335ea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c592ccde87ae44b1074f4dce8430109bb5922e3c0cf7841e761b9c5f26038a2e\"" May 17 00:42:01.620041 env[1298]: time="2025-05-17T00:42:01.618946236Z" level=info msg="StartContainer for \"c592ccde87ae44b1074f4dce8430109bb5922e3c0cf7841e761b9c5f26038a2e\"" May 17 00:42:01.664477 kubelet[1741]: W0517 00:42:01.664250 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.190.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-9c3fefbd06&limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:01.664477 kubelet[1741]: E0517 00:42:01.664419 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.190.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-9c3fefbd06&limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:01.715592 kubelet[1741]: E0517 00:42:01.715516 1741 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://137.184.190.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-9c3fefbd06?timeout=10s\": dial tcp 137.184.190.96:6443: connect: connection refused" interval="1.6s" May 17 00:42:01.796052 kubelet[1741]: W0517 00:42:01.795965 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.190.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:01.796707 kubelet[1741]: E0517 00:42:01.796659 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.190.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:01.802379 kubelet[1741]: W0517 00:42:01.802308 1741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.190.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.190.96:6443: connect: connection refused May 17 00:42:01.802719 kubelet[1741]: E0517 00:42:01.802676 1741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.190.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.190.96:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:01.809509 env[1298]: time="2025-05-17T00:42:01.809109056Z" level=info msg="StartContainer for \"ea5916237d69dbcbfe3a20040c64ba03659f29a044b6aacc7e4234e402fdb9db\" returns successfully" May 17 00:42:01.814534 env[1298]: time="2025-05-17T00:42:01.814463985Z" level=info msg="StartContainer for 
\"795cfb9196903e1da9b62b69b5a0ae38d455d41ba9daa91dda1a99a7f9470240\" returns successfully" May 17 00:42:01.859404 env[1298]: time="2025-05-17T00:42:01.859316583Z" level=info msg="StartContainer for \"c592ccde87ae44b1074f4dce8430109bb5922e3c0cf7841e761b9c5f26038a2e\" returns successfully" May 17 00:42:01.888804 kubelet[1741]: I0517 00:42:01.888752 1741 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:01.889365 kubelet[1741]: E0517 00:42:01.889306 1741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.190.96:6443/api/v1/nodes\": dial tcp 137.184.190.96:6443: connect: connection refused" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:02.365217 kubelet[1741]: E0517 00:42:02.365079 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:02.368013 kubelet[1741]: E0517 00:42:02.367956 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:02.372729 kubelet[1741]: E0517 00:42:02.372688 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:03.374002 kubelet[1741]: E0517 00:42:03.373956 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:03.491080 kubelet[1741]: I0517 00:42:03.491045 1741 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:04.425945 kubelet[1741]: E0517 00:42:04.425885 1741 nodelease.go:49] "Failed to get node when trying to set 
owner ref to the node lease" err="nodes \"ci-3510.3.7-n-9c3fefbd06\" not found" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:04.534657 kubelet[1741]: I0517 00:42:04.534604 1741 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:05.265309 kubelet[1741]: I0517 00:42:05.265192 1741 apiserver.go:52] "Watching apiserver" May 17 00:42:05.305189 kubelet[1741]: I0517 00:42:05.305127 1741 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:42:07.122080 systemd[1]: Reloading. May 17 00:42:07.259492 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-05-17T00:42:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:07.261450 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-05-17T00:42:07Z" level=info msg="torcx already run" May 17 00:42:07.440675 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:07.441403 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:07.496256 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:42:07.600047 kubelet[1741]: W0517 00:42:07.599996 1741 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:07.600597 kubelet[1741]: E0517 00:42:07.600441 1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:07.675388 systemd[1]: Stopping kubelet.service... May 17 00:42:07.700091 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:42:07.700899 systemd[1]: Stopped kubelet.service. May 17 00:42:07.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:07.702031 kernel: kauditd_printk_skb: 42 callbacks suppressed May 17 00:42:07.702152 kernel: audit: type=1131 audit(1747442527.700:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:07.706126 systemd[1]: Starting kubelet.service... May 17 00:42:08.903477 systemd[1]: Started kubelet.service. May 17 00:42:08.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:08.910355 kernel: audit: type=1130 audit(1747442528.905:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:09.097138 kubelet[2089]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:09.097138 kubelet[2089]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:42:09.097138 kubelet[2089]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:09.097138 kubelet[2089]: I0517 00:42:09.096876 2089 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:09.124167 kubelet[2089]: I0517 00:42:09.124099 2089 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:42:09.124167 kubelet[2089]: I0517 00:42:09.124156 2089 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:09.125359 kubelet[2089]: I0517 00:42:09.124807 2089 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:42:09.129852 kubelet[2089]: I0517 00:42:09.128735 2089 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:42:09.133895 kubelet[2089]: I0517 00:42:09.133219 2089 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:09.155319 kubelet[2089]: E0517 00:42:09.155150 2089 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:09.155592 kubelet[2089]: I0517 00:42:09.155565 2089 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:09.169828 kubelet[2089]: I0517 00:42:09.168541 2089 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:42:09.170782 kubelet[2089]: I0517 00:42:09.170739 2089 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:42:09.174274 kubelet[2089]: I0517 00:42:09.172597 2089 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:09.174274 kubelet[2089]: I0517 00:42:09.172698 2089 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.7-n-9c3fefbd06","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:42:09.174274 kubelet[2089]: I0517 00:42:09.173086 2089 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:42:09.174274 kubelet[2089]: I0517 00:42:09.173104 2089 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:42:09.177955 kubelet[2089]: I0517 00:42:09.173190 2089 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:09.177955 kubelet[2089]: I0517 00:42:09.173428 2089 
kubelet.go:408] "Attempting to sync node with API server" May 17 00:42:09.177955 kubelet[2089]: I0517 00:42:09.173498 2089 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:09.177955 kubelet[2089]: I0517 00:42:09.174190 2089 kubelet.go:314] "Adding apiserver pod source" May 17 00:42:09.177955 kubelet[2089]: I0517 00:42:09.174375 2089 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:09.197082 kubelet[2089]: I0517 00:42:09.194882 2089 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:09.206699 kubelet[2089]: I0517 00:42:09.206625 2089 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:42:09.226197 kubelet[2089]: I0517 00:42:09.225931 2089 server.go:1274] "Started kubelet" May 17 00:42:09.227000 audit[2089]: AVC avc: denied { mac_admin } for pid=2089 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:09.227000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:09.232108 kernel: audit: type=1400 audit(1747442529.227:235): avc: denied { mac_admin } for pid=2089 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:09.232358 kernel: audit: type=1401 audit(1747442529.227:235): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:09.232603 kubelet[2089]: I0517 00:42:09.232531 2089 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:42:09.232884 kubelet[2089]: I0517 00:42:09.232854 2089 kubelet.go:1434] 
"Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:42:09.233049 kubelet[2089]: I0517 00:42:09.233032 2089 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:09.227000 audit[2089]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5e930 a1=c000cb8030 a2=c000b5e900 a3=25 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:09.238506 kernel: audit: type=1300 audit(1747442529.227:235): arch=c000003e syscall=188 success=no exit=-22 a0=c000b5e930 a1=c000cb8030 a2=c000b5e900 a3=25 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:09.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:09.248421 kernel: audit: type=1327 audit(1747442529.227:235): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:09.251667 kernel: audit: type=1400 audit(1747442529.232:236): avc: denied { mac_admin } for pid=2089 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:09.251833 kernel: audit: type=1401 
audit(1747442529.232:236): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:09.251875 kernel: audit: type=1300 audit(1747442529.232:236): arch=c000003e syscall=188 success=no exit=-22 a0=c000cb6180 a1=c000cb8048 a2=c000b5e9c0 a3=25 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:09.232000 audit[2089]: AVC avc: denied { mac_admin } for pid=2089 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:09.232000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:09.232000 audit[2089]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cb6180 a1=c000cb8048 a2=c000b5e9c0 a3=25 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:09.259458 kernel: audit: type=1327 audit(1747442529.232:236): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:09.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:09.259734 kubelet[2089]: I0517 00:42:09.254549 2089 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:42:09.260159 kubelet[2089]: I0517 00:42:09.260122 2089 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:42:09.300332 kubelet[2089]: I0517 00:42:09.300210 2089 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:09.307058 kubelet[2089]: I0517 00:42:09.306450 2089 server.go:449] "Adding debug handlers to kubelet server" May 17 00:42:09.324889 kubelet[2089]: I0517 00:42:09.324795 2089 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:42:09.325380 kubelet[2089]: I0517 00:42:09.325349 2089 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:09.338381 kubelet[2089]: I0517 00:42:09.335639 2089 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:42:09.338381 kubelet[2089]: I0517 00:42:09.337103 2089 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:09.343809 kubelet[2089]: I0517 00:42:09.342648 2089 factory.go:221] Registration of the systemd container factory successfully May 17 00:42:09.344195 kubelet[2089]: I0517 00:42:09.344137 2089 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:09.349515 kubelet[2089]: I0517 00:42:09.348693 2089 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:42:09.351126 kubelet[2089]: I0517 00:42:09.350701 2089 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:42:09.351126 kubelet[2089]: I0517 00:42:09.350748 2089 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:42:09.351126 kubelet[2089]: I0517 00:42:09.350778 2089 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:42:09.351126 kubelet[2089]: E0517 00:42:09.350848 2089 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:09.372029 kubelet[2089]: I0517 00:42:09.371991 2089 factory.go:221] Registration of the containerd container factory successfully May 17 00:42:09.396054 kubelet[2089]: E0517 00:42:09.395591 2089 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:09.451391 kubelet[2089]: E0517 00:42:09.451274 2089 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:42:09.509073 kubelet[2089]: I0517 00:42:09.509042 2089 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:42:09.509343 kubelet[2089]: I0517 00:42:09.509324 2089 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:42:09.509509 kubelet[2089]: I0517 00:42:09.509493 2089 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:09.510132 kubelet[2089]: I0517 00:42:09.510075 2089 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:42:09.510433 kubelet[2089]: I0517 00:42:09.510284 2089 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:42:09.510584 kubelet[2089]: I0517 00:42:09.510567 2089 policy_none.go:49] "None policy: Start" May 17 00:42:09.512633 kubelet[2089]: I0517 00:42:09.512602 2089 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:42:09.512890 kubelet[2089]: I0517 00:42:09.512871 2089 state_mem.go:35] "Initializing new in-memory state store" May 
17 00:42:09.513478 kubelet[2089]: I0517 00:42:09.513453 2089 state_mem.go:75] "Updated machine memory state" May 17 00:42:09.515554 kubelet[2089]: I0517 00:42:09.515526 2089 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:42:09.515795 kubelet[2089]: I0517 00:42:09.515767 2089 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:42:09.516137 kubelet[2089]: I0517 00:42:09.516120 2089 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:09.515000 audit[2089]: AVC avc: denied { mac_admin } for pid=2089 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:09.515000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:09.515000 audit[2089]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001142c90 a1=c00100fa88 a2=c001142c60 a3=25 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:09.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:09.516729 kubelet[2089]: I0517 00:42:09.516266 2089 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:09.526024 kubelet[2089]: I0517 00:42:09.525987 2089 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 
00:42:09.627708 kubelet[2089]: I0517 00:42:09.622380 2089 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.639789 kubelet[2089]: I0517 00:42:09.639015 2089 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.639789 kubelet[2089]: I0517 00:42:09.639132 2089 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.660965 kubelet[2089]: W0517 00:42:09.660393 2089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:09.660965 kubelet[2089]: W0517 00:42:09.660688 2089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:09.660965 kubelet[2089]: W0517 00:42:09.660820 2089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:09.660965 kubelet[2089]: E0517 00:42:09.660873 2089 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-9c3fefbd06\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.744407 kubelet[2089]: I0517 00:42:09.744247 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0321c52d05c0cf2ece690ef7b168b26-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-9c3fefbd06\" (UID: \"b0321c52d05c0cf2ece690ef7b168b26\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.744733 kubelet[2089]: I0517 00:42:09.744696 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.744881 kubelet[2089]: I0517 00:42:09.744865 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.744992 kubelet[2089]: I0517 00:42:09.744976 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.745096 kubelet[2089]: I0517 00:42:09.745078 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.745308 kubelet[2089]: I0517 00:42:09.745260 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.745465 kubelet[2089]: I0517 
00:42:09.745447 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04a9884ddf305183c8750728479b7cae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" (UID: \"04a9884ddf305183c8750728479b7cae\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.745577 kubelet[2089]: I0517 00:42:09.745563 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.745686 kubelet[2089]: I0517 00:42:09.745672 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18034f2fa02f0ff8ab0cddf95ebf05b1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-9c3fefbd06\" (UID: \"18034f2fa02f0ff8ab0cddf95ebf05b1\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:09.961527 kubelet[2089]: E0517 00:42:09.961455 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:09.961823 kubelet[2089]: E0517 00:42:09.961790 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:09.962134 kubelet[2089]: E0517 00:42:09.961907 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" May 17 00:42:10.176334 kubelet[2089]: I0517 00:42:10.176276 2089 apiserver.go:52] "Watching apiserver" May 17 00:42:10.238055 kubelet[2089]: I0517 00:42:10.238003 2089 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:42:10.417201 kubelet[2089]: E0517 00:42:10.417168 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.417678 kubelet[2089]: E0517 00:42:10.417648 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.433672 kubelet[2089]: W0517 00:42:10.431466 2089 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:42:10.433672 kubelet[2089]: E0517 00:42:10.431565 2089 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-9c3fefbd06\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" May 17 00:42:10.433672 kubelet[2089]: E0517 00:42:10.431856 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:10.459394 kubelet[2089]: I0517 00:42:10.459313 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-9c3fefbd06" podStartSLOduration=1.4592642200000001 podStartE2EDuration="1.45926422s" podCreationTimestamp="2025-05-17 00:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:10.449935864 +0000 UTC 
m=+1.513750697" watchObservedRunningTime="2025-05-17 00:42:10.45926422 +0000 UTC m=+1.523079056" May 17 00:42:10.460212 kubelet[2089]: I0517 00:42:10.460171 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-9c3fefbd06" podStartSLOduration=3.4601539199999998 podStartE2EDuration="3.46015392s" podCreationTimestamp="2025-05-17 00:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:10.46002734 +0000 UTC m=+1.523842164" watchObservedRunningTime="2025-05-17 00:42:10.46015392 +0000 UTC m=+1.523968763" May 17 00:42:11.419857 kubelet[2089]: E0517 00:42:11.419806 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:11.420682 kubelet[2089]: E0517 00:42:11.420251 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:12.020534 kubelet[2089]: I0517 00:42:12.020484 2089 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:42:12.021132 env[1298]: time="2025-05-17T00:42:12.021080292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:42:12.022194 kubelet[2089]: I0517 00:42:12.022162 2089 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:42:12.218365 kubelet[2089]: I0517 00:42:12.218280 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-9c3fefbd06" podStartSLOduration=3.218251442 podStartE2EDuration="3.218251442s" podCreationTimestamp="2025-05-17 00:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:10.47130506 +0000 UTC m=+1.535119893" watchObservedRunningTime="2025-05-17 00:42:12.218251442 +0000 UTC m=+3.282066262" May 17 00:42:12.260767 kubelet[2089]: I0517 00:42:12.260714 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzg7h\" (UniqueName: \"kubernetes.io/projected/2b384543-78aa-49a9-acda-b2b4c7afefbc-kube-api-access-nzg7h\") pod \"kube-proxy-ns7sd\" (UID: \"2b384543-78aa-49a9-acda-b2b4c7afefbc\") " pod="kube-system/kube-proxy-ns7sd" May 17 00:42:12.260767 kubelet[2089]: I0517 00:42:12.260762 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b384543-78aa-49a9-acda-b2b4c7afefbc-kube-proxy\") pod \"kube-proxy-ns7sd\" (UID: \"2b384543-78aa-49a9-acda-b2b4c7afefbc\") " pod="kube-system/kube-proxy-ns7sd" May 17 00:42:12.260957 kubelet[2089]: I0517 00:42:12.260789 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b384543-78aa-49a9-acda-b2b4c7afefbc-xtables-lock\") pod \"kube-proxy-ns7sd\" (UID: \"2b384543-78aa-49a9-acda-b2b4c7afefbc\") " pod="kube-system/kube-proxy-ns7sd" May 17 00:42:12.260957 kubelet[2089]: I0517 00:42:12.260804 2089 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b384543-78aa-49a9-acda-b2b4c7afefbc-lib-modules\") pod \"kube-proxy-ns7sd\" (UID: \"2b384543-78aa-49a9-acda-b2b4c7afefbc\") " pod="kube-system/kube-proxy-ns7sd" May 17 00:42:12.372477 kubelet[2089]: E0517 00:42:12.372333 2089 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:42:12.372740 kubelet[2089]: E0517 00:42:12.372718 2089 projected.go:194] Error preparing data for projected volume kube-api-access-nzg7h for pod kube-system/kube-proxy-ns7sd: configmap "kube-root-ca.crt" not found May 17 00:42:12.372933 kubelet[2089]: E0517 00:42:12.372916 2089 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b384543-78aa-49a9-acda-b2b4c7afefbc-kube-api-access-nzg7h podName:2b384543-78aa-49a9-acda-b2b4c7afefbc nodeName:}" failed. No retries permitted until 2025-05-17 00:42:12.872883082 +0000 UTC m=+3.936697917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nzg7h" (UniqueName: "kubernetes.io/projected/2b384543-78aa-49a9-acda-b2b4c7afefbc-kube-api-access-nzg7h") pod "kube-proxy-ns7sd" (UID: "2b384543-78aa-49a9-acda-b2b4c7afefbc") : configmap "kube-root-ca.crt" not found May 17 00:42:12.916531 kubelet[2089]: E0517 00:42:12.916490 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:12.965475 kubelet[2089]: I0517 00:42:12.965441 2089 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:42:13.124278 kubelet[2089]: E0517 00:42:13.124227 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:13.124942 env[1298]: time="2025-05-17T00:42:13.124899244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ns7sd,Uid:2b384543-78aa-49a9-acda-b2b4c7afefbc,Namespace:kube-system,Attempt:0,}" May 17 00:42:13.169803 env[1298]: time="2025-05-17T00:42:13.169437488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:13.170024 env[1298]: time="2025-05-17T00:42:13.169973192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:13.170131 env[1298]: time="2025-05-17T00:42:13.170108181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:13.170422 env[1298]: time="2025-05-17T00:42:13.170387869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3300a277fcef94db1eb6a9c03bae459b3660c807fab2866d671abd0c9eabc26 pid=2136 runtime=io.containerd.runc.v2 May 17 00:42:13.293124 env[1298]: time="2025-05-17T00:42:13.293053996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ns7sd,Uid:2b384543-78aa-49a9-acda-b2b4c7afefbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3300a277fcef94db1eb6a9c03bae459b3660c807fab2866d671abd0c9eabc26\"" May 17 00:42:13.294759 kubelet[2089]: E0517 00:42:13.294713 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:13.299974 env[1298]: time="2025-05-17T00:42:13.299904198Z" level=info msg="CreateContainer within sandbox \"f3300a277fcef94db1eb6a9c03bae459b3660c807fab2866d671abd0c9eabc26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:42:13.314089 env[1298]: time="2025-05-17T00:42:13.314021890Z" level=info msg="CreateContainer within sandbox \"f3300a277fcef94db1eb6a9c03bae459b3660c807fab2866d671abd0c9eabc26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9e9f0fcf0d8be5398ceee408637bd8f798274b32a5e34c142107d45dbd4a922d\"" May 17 00:42:13.321537 env[1298]: time="2025-05-17T00:42:13.321434397Z" level=info msg="StartContainer for \"9e9f0fcf0d8be5398ceee408637bd8f798274b32a5e34c142107d45dbd4a922d\"" May 17 00:42:13.368824 kubelet[2089]: I0517 00:42:13.368784 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07e19c5c-923a-488e-a73c-493bd8d2f65d-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-r5x8t\" (UID: 
\"07e19c5c-923a-488e-a73c-493bd8d2f65d\") " pod="tigera-operator/tigera-operator-7c5755cdcb-r5x8t" May 17 00:42:13.369207 kubelet[2089]: I0517 00:42:13.368849 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlck\" (UniqueName: \"kubernetes.io/projected/07e19c5c-923a-488e-a73c-493bd8d2f65d-kube-api-access-qzlck\") pod \"tigera-operator-7c5755cdcb-r5x8t\" (UID: \"07e19c5c-923a-488e-a73c-493bd8d2f65d\") " pod="tigera-operator/tigera-operator-7c5755cdcb-r5x8t" May 17 00:42:13.421456 env[1298]: time="2025-05-17T00:42:13.421238913Z" level=info msg="StartContainer for \"9e9f0fcf0d8be5398ceee408637bd8f798274b32a5e34c142107d45dbd4a922d\" returns successfully" May 17 00:42:13.436417 kubelet[2089]: E0517 00:42:13.435714 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:13.522756 env[1298]: time="2025-05-17T00:42:13.522665576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-r5x8t,Uid:07e19c5c-923a-488e-a73c-493bd8d2f65d,Namespace:tigera-operator,Attempt:0,}" May 17 00:42:13.551703 env[1298]: time="2025-05-17T00:42:13.551563549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:13.552058 env[1298]: time="2025-05-17T00:42:13.551988262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:13.552254 env[1298]: time="2025-05-17T00:42:13.552209325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:13.552720 env[1298]: time="2025-05-17T00:42:13.552668602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bea346a391fb662e98953367006418af6020a13de01cf117bcda96eacab47adc pid=2217 runtime=io.containerd.runc.v2 May 17 00:42:13.647874 env[1298]: time="2025-05-17T00:42:13.647810127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-r5x8t,Uid:07e19c5c-923a-488e-a73c-493bd8d2f65d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bea346a391fb662e98953367006418af6020a13de01cf117bcda96eacab47adc\"" May 17 00:42:13.652860 env[1298]: time="2025-05-17T00:42:13.650776469Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:42:13.707372 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:42:13.707539 kernel: audit: type=1325 audit(1747442533.704:238): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.704000 audit[2282]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.704000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe25192750 a2=0 a3=7ffe2519273c items=0 ppid=2190 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.712724 kernel: audit: type=1300 audit(1747442533.704:238): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe25192750 a2=0 a3=7ffe2519273c items=0 ppid=2190 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:42:13.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:13.718322 kernel: audit: type=1327 audit(1747442533.704:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:13.718000 audit[2283]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.718000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf9706b0 a2=0 a3=7ffecf97069c items=0 ppid=2190 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.726815 kernel: audit: type=1325 audit(1747442533.718:239): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.726970 kernel: audit: type=1300 audit(1747442533.718:239): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf9706b0 a2=0 a3=7ffecf97069c items=0 ppid=2190 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.727032 kernel: audit: type=1327 audit(1747442533.718:239): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:13.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:13.719000 audit[2284]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.731053 kernel: 
audit: type=1325 audit(1747442533.719:240): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.731182 kernel: audit: type=1300 audit(1747442533.719:240): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf7ecb9a0 a2=0 a3=7ffcf7ecb98c items=0 ppid=2190 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.719000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf7ecb9a0 a2=0 a3=7ffcf7ecb98c items=0 ppid=2190 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.734554 kernel: audit: type=1327 audit(1747442533.719:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:13.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:13.736651 kernel: audit: type=1325 audit(1747442533.721:241): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.721000 audit[2286]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.721000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d8aba40 a2=0 a3=7ffd9d8aba2c items=0 ppid=2190 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.721000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:42:13.726000 audit[2285]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.726000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdba1da50 a2=0 a3=7fffdba1da3c items=0 ppid=2190 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.726000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:13.728000 audit[2287]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.728000 audit[2287]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4185aa00 a2=0 a3=7ffe4185a9ec items=0 ppid=2190 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:42:13.810000 audit[2288]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.810000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcec7982b0 a2=0 a3=7ffcec79829c items=0 ppid=2190 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 00:42:13.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:42:13.820000 audit[2290]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2290 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.820000 audit[2290]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffccc10af40 a2=0 a3=7ffccc10af2c items=0 ppid=2190 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.820000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:42:13.829000 audit[2293]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.829000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd698f5500 a2=0 a3=7ffd698f54ec items=0 ppid=2190 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:42:13.831000 audit[2294]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 
00:42:13.831000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe22546ec0 a2=0 a3=7ffe22546eac items=0 ppid=2190 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:42:13.836000 audit[2296]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.836000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe17f36c30 a2=0 a3=7ffe17f36c1c items=0 ppid=2190 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:42:13.839000 audit[2297]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.839000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa1b7df30 a2=0 a3=7fffa1b7df1c items=0 ppid=2190 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 
00:42:13.845000 audit[2299]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.845000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff159dbd70 a2=0 a3=7fff159dbd5c items=0 ppid=2190 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:42:13.854000 audit[2302]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.854000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd08b53160 a2=0 a3=7ffd08b5314c items=0 ppid=2190 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:42:13.856000 audit[2303]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.856000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4e238b20 a2=0 a3=7ffd4e238b0c items=0 ppid=2190 pid=2303 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:42:13.862000 audit[2305]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.862000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe495e04f0 a2=0 a3=7ffe495e04dc items=0 ppid=2190 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:42:13.864000 audit[2306]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.864000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4c5f7570 a2=0 a3=7ffe4c5f755c items=0 ppid=2190 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:42:13.870000 audit[2308]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.870000 audit[2308]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcc5e04c0 a2=0 a3=7ffdcc5e04ac items=0 ppid=2190 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:13.879000 audit[2311]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.879000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcb979050 a2=0 a3=7ffdcb97903c items=0 ppid=2190 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:13.888000 audit[2314]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.888000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff70834b80 a2=0 a3=7fff70834b6c items=0 ppid=2190 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.888000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:42:13.890000 audit[2315]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.890000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd86ceed40 a2=0 a3=7ffd86ceed2c items=0 ppid=2190 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:42:13.895000 audit[2317]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.895000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd8cd83a40 a2=0 a3=7ffd8cd83a2c items=0 ppid=2190 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:13.902000 audit[2320]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.902000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3cd6d110 a2=0 a3=7fff3cd6d0fc 
items=0 ppid=2190 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:13.904000 audit[2321]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.904000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe468648e0 a2=0 a3=7ffe468648cc items=0 ppid=2190 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:42:13.910000 audit[2323]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:13.910000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc32260e50 a2=0 a3=7ffc32260e3c items=0 ppid=2190 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:42:13.953000 audit[2329]: 
NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:13.953000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda1010c10 a2=0 a3=7ffda1010bfc items=0 ppid=2190 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:13.963000 audit[2329]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:13.963000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffda1010c10 a2=0 a3=7ffda1010bfc items=0 ppid=2190 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:13.975429 systemd[1]: run-containerd-runc-k8s.io-f3300a277fcef94db1eb6a9c03bae459b3660c807fab2866d671abd0c9eabc26-runc.XY2EUY.mount: Deactivated successfully. 
May 17 00:42:13.976000 audit[2334]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.976000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd51e1790 a2=0 a3=7ffdd51e177c items=0 ppid=2190 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:42:13.980000 audit[2336]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.980000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe32eaf720 a2=0 a3=7ffe32eaf70c items=0 ppid=2190 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:42:13.987000 audit[2339]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.987000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff54531050 a2=0 a3=7fff5453103c items=0 ppid=2190 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:42:13.990000 audit[2340]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.990000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9953e750 a2=0 a3=7ffd9953e73c items=0 ppid=2190 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:42:13.995000 audit[2342]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.995000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcedd54ef0 a2=0 a3=7ffcedd54edc items=0 ppid=2190 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:42:13.997000 audit[2343]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2343 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:13.997000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ad84520 a2=0 a3=7fff6ad8450c items=0 ppid=2190 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:13.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:42:14.001000 audit[2345]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.001000 audit[2345]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff00294550 a2=0 a3=7fff0029453c items=0 ppid=2190 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:42:14.006000 audit[2348]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.006000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd6dd867a0 a2=0 a3=7ffd6dd8678c items=0 ppid=2190 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.006000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:42:14.008000 audit[2349]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.008000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7aa83780 a2=0 a3=7ffd7aa8376c items=0 ppid=2190 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:42:14.012000 audit[2351]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.012000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe025e68b0 a2=0 a3=7ffe025e689c items=0 ppid=2190 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:42:14.014000 audit[2352]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.014000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffe84a32db0 a2=0 a3=7ffe84a32d9c items=0 ppid=2190 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:42:14.018000 audit[2354]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.018000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7221c820 a2=0 a3=7ffc7221c80c items=0 ppid=2190 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.018000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:14.025000 audit[2357]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.025000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2db24970 a2=0 a3=7fff2db2495c items=0 ppid=2190 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.025000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:42:14.037000 audit[2360]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.037000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff99e1f4f0 a2=0 a3=7fff99e1f4dc items=0 ppid=2190 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:42:14.039000 audit[2361]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.039000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5bdea760 a2=0 a3=7ffd5bdea74c items=0 ppid=2190 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:42:14.044000 audit[2363]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.044000 audit[2363]: SYSCALL arch=c000003e syscall=46 
success=yes exit=600 a0=3 a1=7ffe4b0158e0 a2=0 a3=7ffe4b0158cc items=0 ppid=2190 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:14.050000 audit[2366]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.050000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc77e50400 a2=0 a3=7ffc77e503ec items=0 ppid=2190 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:14.051000 audit[2367]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.051000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe433a9fd0 a2=0 a3=7ffe433a9fbc items=0 ppid=2190 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.051000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:42:14.055000 audit[2369]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.055000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffca8eccfa0 a2=0 a3=7ffca8eccf8c items=0 ppid=2190 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:42:14.058000 audit[2370]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.058000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc573bad0 a2=0 a3=7ffdc573babc items=0 ppid=2190 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:42:14.062000 audit[2372]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.062000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe2edd1e40 a2=0 a3=7ffe2edd1e2c items=0 ppid=2190 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:14.069000 audit[2375]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:14.069000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe9e36c590 a2=0 a3=7ffe9e36c57c items=0 ppid=2190 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:14.075000 audit[2377]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:42:14.075000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe6bbe7e20 a2=0 a3=7ffe6bbe7e0c items=0 ppid=2190 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.075000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:14.076000 audit[2377]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:42:14.076000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe6bbe7e20 a2=0 a3=7ffe6bbe7e0c items=0 
ppid=2190 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.076000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:14.438029 kubelet[2089]: E0517 00:42:14.437984 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:14.456373 kubelet[2089]: I0517 00:42:14.453761 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ns7sd" podStartSLOduration=2.453740101 podStartE2EDuration="2.453740101s" podCreationTimestamp="2025-05-17 00:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:14.452624414 +0000 UTC m=+5.516439256" watchObservedRunningTime="2025-05-17 00:42:14.453740101 +0000 UTC m=+5.517554944" May 17 00:42:14.830181 kubelet[2089]: E0517 00:42:14.829993 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:15.440694 kubelet[2089]: E0517 00:42:15.440651 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:16.271423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321049606.mount: Deactivated successfully. 
May 17 00:42:17.307732 env[1298]: time="2025-05-17T00:42:17.307652944Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.309224 env[1298]: time="2025-05-17T00:42:17.309173432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.311071 env[1298]: time="2025-05-17T00:42:17.311026068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.313398 env[1298]: time="2025-05-17T00:42:17.313340213Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:42:17.315851 env[1298]: time="2025-05-17T00:42:17.315800359Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.320487 env[1298]: time="2025-05-17T00:42:17.320441018Z" level=info msg="CreateContainer within sandbox \"bea346a391fb662e98953367006418af6020a13de01cf117bcda96eacab47adc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:42:17.335256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703961894.mount: Deactivated successfully. 
May 17 00:42:17.342370 env[1298]: time="2025-05-17T00:42:17.342314415Z" level=info msg="CreateContainer within sandbox \"bea346a391fb662e98953367006418af6020a13de01cf117bcda96eacab47adc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"200506fb63b2d6f3b615024fbd85e028c72ffe9a0e813e811b235c99ded48e35\"" May 17 00:42:17.343686 env[1298]: time="2025-05-17T00:42:17.343635384Z" level=info msg="StartContainer for \"200506fb63b2d6f3b615024fbd85e028c72ffe9a0e813e811b235c99ded48e35\"" May 17 00:42:17.433573 env[1298]: time="2025-05-17T00:42:17.433501713Z" level=info msg="StartContainer for \"200506fb63b2d6f3b615024fbd85e028c72ffe9a0e813e811b235c99ded48e35\" returns successfully" May 17 00:42:18.327974 systemd[1]: run-containerd-runc-k8s.io-200506fb63b2d6f3b615024fbd85e028c72ffe9a0e813e811b235c99ded48e35-runc.ur4LBC.mount: Deactivated successfully. May 17 00:42:20.525738 kubelet[2089]: E0517 00:42:20.525707 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:20.577835 kubelet[2089]: I0517 00:42:20.577785 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-r5x8t" podStartSLOduration=3.909709454 podStartE2EDuration="7.5777534s" podCreationTimestamp="2025-05-17 00:42:13 +0000 UTC" firstStartedPulling="2025-05-17 00:42:13.649894849 +0000 UTC m=+4.713709669" lastFinishedPulling="2025-05-17 00:42:17.317938794 +0000 UTC m=+8.381753615" observedRunningTime="2025-05-17 00:42:17.460476877 +0000 UTC m=+8.524291719" watchObservedRunningTime="2025-05-17 00:42:20.5777534 +0000 UTC m=+11.641568241" May 17 00:42:22.869699 update_engine[1288]: I0517 00:42:22.869572 1288 update_attempter.cc:509] Updating boot flags... 
May 17 00:42:24.171000 audit[2460]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.174379 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 00:42:24.174504 kernel: audit: type=1325 audit(1747442544.171:289): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.171000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5a1171b0 a2=0 a3=7fff5a11719c items=0 ppid=2190 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.181878 kernel: audit: type=1300 audit(1747442544.171:289): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5a1171b0 a2=0 a3=7fff5a11719c items=0 ppid=2190 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.186341 kernel: audit: type=1327 audit(1747442544.171:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.187000 audit[2460]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.191354 kernel: audit: type=1325 audit(1747442544.187:290): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.187000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7fff5a1171b0 a2=0 a3=0 items=0 ppid=2190 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.198333 kernel: audit: type=1300 audit(1747442544.187:290): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5a1171b0 a2=0 a3=0 items=0 ppid=2190 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.207334 kernel: audit: type=1327 audit(1747442544.187:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.224000 audit[2462]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.228328 kernel: audit: type=1325 audit(1747442544.224:291): table=filter:91 family=2 entries=15 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.224000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd3a1c440 a2=0 a3=7ffdd3a1c42c items=0 ppid=2190 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.234334 kernel: audit: type=1300 audit(1747442544.224:291): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd3a1c440 a2=0 a3=7ffdd3a1c42c items=0 ppid=2190 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.239431 kernel: audit: type=1327 audit(1747442544.224:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.245000 audit[2462]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.250338 kernel: audit: type=1325 audit(1747442544.245:292): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:24.245000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd3a1c440 a2=0 a3=0 items=0 ppid=2190 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:24.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:24.718000 audit[1467]: USER_END pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:42:24.718000 audit[1467]: CRED_DISP pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:42:24.719697 sudo[1467]: pam_unix(sudo:session): session closed for user root May 17 00:42:24.725375 sshd[1461]: pam_unix(sshd:session): session closed for user core May 17 00:42:24.726000 audit[1461]: USER_END pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:42:24.727000 audit[1461]: CRED_DISP pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:42:24.730955 systemd[1]: sshd@6-137.184.190.96:22-147.75.109.163:55848.service: Deactivated successfully. May 17 00:42:24.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-137.184.190.96:22-147.75.109.163:55848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.732527 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:42:24.733242 systemd-logind[1287]: Session 7 logged out. Waiting for processes to exit. May 17 00:42:24.734690 systemd-logind[1287]: Removed session 7. 
May 17 00:42:27.417000 audit[2483]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:27.417000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcafba7240 a2=0 a3=7ffcafba722c items=0 ppid=2190 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:27.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:27.421000 audit[2483]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:27.421000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcafba7240 a2=0 a3=0 items=0 ppid=2190 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:27.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:27.520000 audit[2485]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:27.520000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea79cb5e0 a2=0 a3=7ffea79cb5cc items=0 ppid=2190 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:27.520000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:27.568374 kubelet[2089]: I0517 00:42:27.568319 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8e253c4-0fdf-4351-b98b-719eb1d736a1-tigera-ca-bundle\") pod \"calico-typha-5d479c4fbb-fpk2m\" (UID: \"c8e253c4-0fdf-4351-b98b-719eb1d736a1\") " pod="calico-system/calico-typha-5d479c4fbb-fpk2m" May 17 00:42:27.569075 kubelet[2089]: I0517 00:42:27.569046 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7r8\" (UniqueName: \"kubernetes.io/projected/c8e253c4-0fdf-4351-b98b-719eb1d736a1-kube-api-access-hw7r8\") pod \"calico-typha-5d479c4fbb-fpk2m\" (UID: \"c8e253c4-0fdf-4351-b98b-719eb1d736a1\") " pod="calico-system/calico-typha-5d479c4fbb-fpk2m" May 17 00:42:27.569264 kubelet[2089]: I0517 00:42:27.569239 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8e253c4-0fdf-4351-b98b-719eb1d736a1-typha-certs\") pod \"calico-typha-5d479c4fbb-fpk2m\" (UID: \"c8e253c4-0fdf-4351-b98b-719eb1d736a1\") " pod="calico-system/calico-typha-5d479c4fbb-fpk2m" May 17 00:42:27.609000 audit[2485]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:27.609000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea79cb5e0 a2=0 a3=0 items=0 ppid=2190 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:27.609000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:27.871398 kubelet[2089]: I0517 00:42:27.871233 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-cni-bin-dir\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.871633 kubelet[2089]: I0517 00:42:27.871614 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-policysync\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.871758 kubelet[2089]: I0517 00:42:27.871744 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-var-lib-calico\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.871856 kubelet[2089]: I0517 00:42:27.871842 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-tigera-ca-bundle\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.871952 kubelet[2089]: I0517 00:42:27.871936 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-xtables-lock\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " 
pod="calico-system/calico-node-cjrmp" May 17 00:42:27.872056 kubelet[2089]: I0517 00:42:27.872042 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-var-run-calico\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.872142 kubelet[2089]: E0517 00:42:27.872116 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:27.872224 kubelet[2089]: I0517 00:42:27.872134 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-cni-net-dir\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.872477 kubelet[2089]: I0517 00:42:27.872455 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-lib-modules\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.872618 kubelet[2089]: I0517 00:42:27.872601 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-flexvol-driver-host\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.872896 kubelet[2089]: I0517 00:42:27.872709 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-cni-log-dir\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.873076 kubelet[2089]: I0517 00:42:27.873041 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mq4\" (UniqueName: \"kubernetes.io/projected/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-kube-api-access-92mq4\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.875809 kubelet[2089]: I0517 00:42:27.875753 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6-node-certs\") pod \"calico-node-cjrmp\" (UID: \"1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6\") " pod="calico-system/calico-node-cjrmp" May 17 00:42:27.876981 env[1298]: time="2025-05-17T00:42:27.876391427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d479c4fbb-fpk2m,Uid:c8e253c4-0fdf-4351-b98b-719eb1d736a1,Namespace:calico-system,Attempt:0,}" May 17 00:42:27.911862 env[1298]: time="2025-05-17T00:42:27.906824400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:27.911862 env[1298]: time="2025-05-17T00:42:27.906963826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:27.911862 env[1298]: time="2025-05-17T00:42:27.906989688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:27.911862 env[1298]: time="2025-05-17T00:42:27.907210816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/654768cb812657f9341ffdff2de1ac0d6c5d0f2e5d7d5b6956165fb300b69222 pid=2494 runtime=io.containerd.runc.v2 May 17 00:42:27.986665 kubelet[2089]: E0517 00:42:27.986626 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:27.986901 kubelet[2089]: W0517 00:42:27.986876 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:27.986996 kubelet[2089]: E0517 00:42:27.986980 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.001503 kubelet[2089]: E0517 00:42:28.001464 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.001705 kubelet[2089]: W0517 00:42:28.001680 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.001787 kubelet[2089]: E0517 00:42:28.001771 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.016559 kubelet[2089]: E0517 00:42:28.016493 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:28.068170 kubelet[2089]: E0517 00:42:28.068132 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.068426 kubelet[2089]: W0517 00:42:28.068400 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.068546 kubelet[2089]: E0517 00:42:28.068525 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.069170 kubelet[2089]: E0517 00:42:28.069134 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.069381 kubelet[2089]: W0517 00:42:28.069354 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.069527 kubelet[2089]: E0517 00:42:28.069506 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.071415 kubelet[2089]: E0517 00:42:28.071364 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.071610 kubelet[2089]: W0517 00:42:28.071581 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.071737 kubelet[2089]: E0517 00:42:28.071714 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.072210 kubelet[2089]: E0517 00:42:28.072189 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.072420 kubelet[2089]: W0517 00:42:28.072398 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.072504 kubelet[2089]: E0517 00:42:28.072490 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.072809 kubelet[2089]: E0517 00:42:28.072796 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.072951 kubelet[2089]: W0517 00:42:28.072934 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.073034 kubelet[2089]: E0517 00:42:28.073021 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.073365 kubelet[2089]: E0517 00:42:28.073349 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.073493 kubelet[2089]: W0517 00:42:28.073474 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.073593 kubelet[2089]: E0517 00:42:28.073575 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.074894 kubelet[2089]: E0517 00:42:28.074874 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.075037 kubelet[2089]: W0517 00:42:28.075021 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.075129 kubelet[2089]: E0517 00:42:28.075111 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.080127 kubelet[2089]: E0517 00:42:28.077414 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.080127 kubelet[2089]: W0517 00:42:28.077442 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.080127 kubelet[2089]: E0517 00:42:28.077479 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.080127 kubelet[2089]: E0517 00:42:28.077896 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.080127 kubelet[2089]: W0517 00:42:28.077912 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.080127 kubelet[2089]: E0517 00:42:28.077948 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.080127 kubelet[2089]: I0517 00:42:28.077974 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb-kubelet-dir\") pod \"csi-node-driver-xrqvm\" (UID: \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\") " pod="calico-system/csi-node-driver-xrqvm" May 17 00:42:28.080127 kubelet[2089]: E0517 00:42:28.078203 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.080127 kubelet[2089]: W0517 00:42:28.078212 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078226 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.080724 kubelet[2089]: I0517 00:42:28.078241 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb-registration-dir\") pod \"csi-node-driver-xrqvm\" (UID: \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\") " pod="calico-system/csi-node-driver-xrqvm" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078466 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.080724 kubelet[2089]: W0517 00:42:28.078477 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078490 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078649 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.080724 kubelet[2089]: W0517 00:42:28.078656 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078668 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.080724 kubelet[2089]: E0517 00:42:28.078830 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081068 kubelet[2089]: W0517 00:42:28.078839 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.078850 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.079097 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081068 kubelet[2089]: W0517 00:42:28.079107 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.079121 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.079336 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081068 kubelet[2089]: W0517 00:42:28.079344 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.079440 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.081068 kubelet[2089]: E0517 00:42:28.079594 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081068 kubelet[2089]: W0517 00:42:28.079606 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.079622 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.079792 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081473 kubelet[2089]: W0517 00:42:28.079801 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.079812 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.080019 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081473 kubelet[2089]: W0517 00:42:28.080030 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.080043 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.080845 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081473 kubelet[2089]: W0517 00:42:28.080855 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081473 kubelet[2089]: E0517 00:42:28.080868 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081027 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081742 kubelet[2089]: W0517 00:42:28.081035 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081043 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081183 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081742 kubelet[2089]: W0517 00:42:28.081192 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081204 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081521 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.081742 kubelet[2089]: W0517 00:42:28.081534 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081549 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.081742 kubelet[2089]: E0517 00:42:28.081726 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.082018 kubelet[2089]: W0517 00:42:28.081734 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.082018 kubelet[2089]: E0517 00:42:28.081742 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.082018 kubelet[2089]: E0517 00:42:28.081916 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.082018 kubelet[2089]: W0517 00:42:28.081927 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.082018 kubelet[2089]: E0517 00:42:28.081941 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:28.082145 kubelet[2089]: E0517 00:42:28.082118 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.082145 kubelet[2089]: W0517 00:42:28.082128 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.082145 kubelet[2089]: E0517 00:42:28.082140 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.083015 kubelet[2089]: E0517 00:42:28.082357 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.083015 kubelet[2089]: W0517 00:42:28.082373 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.083015 kubelet[2089]: E0517 00:42:28.082386 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:28.083243 env[1298]: time="2025-05-17T00:42:28.082591800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjrmp,Uid:1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6,Namespace:calico-system,Attempt:0,}" May 17 00:42:28.127097 env[1298]: time="2025-05-17T00:42:28.126883991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:28.127097 env[1298]: time="2025-05-17T00:42:28.126957471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:28.127097 env[1298]: time="2025-05-17T00:42:28.126968944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:28.132086 env[1298]: time="2025-05-17T00:42:28.130918003Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c pid=2571 runtime=io.containerd.runc.v2 May 17 00:42:28.150025 env[1298]: time="2025-05-17T00:42:28.148251839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d479c4fbb-fpk2m,Uid:c8e253c4-0fdf-4351-b98b-719eb1d736a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"654768cb812657f9341ffdff2de1ac0d6c5d0f2e5d7d5b6956165fb300b69222\"" May 17 00:42:28.164016 kubelet[2089]: E0517 00:42:28.160959 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:28.172626 env[1298]: time="2025-05-17T00:42:28.172581409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:42:28.178918 kubelet[2089]: E0517 00:42:28.178872 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:28.178918 kubelet[2089]: W0517 00:42:28.178907 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:28.178918 kubelet[2089]: E0517 00:42:28.178936 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:42:28.179281 kubelet[2089]: E0517 00:42:28.179259 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:42:28.179281 kubelet[2089]: W0517 00:42:28.179274 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:42:28.179451 kubelet[2089]: E0517 00:42:28.179304 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:42:28.179451 kubelet[2089]: I0517 00:42:28.179333 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb-varrun\") pod \"csi-node-driver-xrqvm\" (UID: \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\") " pod="calico-system/csi-node-driver-xrqvm"
May 17 00:42:28.179748 kubelet[2089]: I0517 00:42:28.179715 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb-socket-dir\") pod \"csi-node-driver-xrqvm\" (UID: \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\") " pod="calico-system/csi-node-driver-xrqvm"
May 17 00:42:28.180571 kubelet[2089]: I0517 00:42:28.180551 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhft\" (UniqueName: \"kubernetes.io/projected/cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb-kube-api-access-bvhft\") pod \"csi-node-driver-xrqvm\" (UID: \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\") " pod="calico-system/csi-node-driver-xrqvm"
May 17 00:42:28.193538 kubelet[2089]: E0517 00:42:28.186253 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:42:28.261276 env[1298]: time="2025-05-17T00:42:28.261203442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjrmp,Uid:1d0cefae-6c10-4ab7-9d8a-fbb28c9c0cc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\""
May 17 00:42:28.282931 kubelet[2089]: E0517 00:42:28.282721 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:42:28.324930 kubelet[2089]: E0517 00:42:28.324902 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:42:28.650000 audit[2650]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:42:28.650000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc46376930 a2=0 a3=7ffc4637691c items=0 ppid=2190 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:28.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:42:28.656000 audit[2650]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:42:28.656000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc46376930 a2=0 a3=0 items=0 ppid=2190 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:28.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:42:29.353273 kubelet[2089]: E0517 00:42:29.353177 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb"
May 17 00:42:29.929474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619052960.mount: Deactivated successfully.
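The audit PROCTITLE records above carry the audited command line as hex-encoded bytes with NUL separators between argv elements. A minimal decoding sketch (pure stdlib; the helper name `decode_proctitle` is ours, not part of any audit tooling):

```python
def decode_proctitle(hex_proctitle: str) -> list[str]:
    """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_proctitle).decode("ascii").split("\x00")

# The proctitle value from the NETFILTER_CFG events above:
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
print(argv)  # the iptables-restore invocation behind these audit records
```

Applied to the records above, this recovers the `iptables-restore --noflush --counters` call that kube-proxy's rule sync issues.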
May 17 00:42:31.084444 env[1298]: time="2025-05-17T00:42:31.084357269Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:31.086924 env[1298]: time="2025-05-17T00:42:31.086823673Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:31.089640 env[1298]: time="2025-05-17T00:42:31.089582854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:31.094077 env[1298]: time="2025-05-17T00:42:31.091815979Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:31.094077 env[1298]: time="2025-05-17T00:42:31.092602113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:42:31.099451 env[1298]: time="2025-05-17T00:42:31.096594851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:42:31.119554 env[1298]: time="2025-05-17T00:42:31.119475003Z" level=info msg="CreateContainer within sandbox \"654768cb812657f9341ffdff2de1ac0d6c5d0f2e5d7d5b6956165fb300b69222\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:42:31.134311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846100497.mount: Deactivated successfully.
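The driver-call failures that recur throughout this log come from kubelet probing the FlexVolume directory `nodeagent~uds` and finding no executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds; with empty driver output, the subsequent JSON unmarshal of "" then fails too. A rough sketch of the existence/executability check kubelet's probe implies (our own diagnostic helper, not kubelet code):

```python
import os
import stat

def probe_flexvolume_driver(plugin_root: str, entry: str) -> str:
    """Report why a FlexVolume entry like 'nodeagent~uds' would fail kubelet's probe.

    kubelet expects <plugin_root>/<vendor>~<driver>/<driver> to be an executable file.
    """
    driver = entry.split("~", 1)[-1]          # 'nodeagent~uds' -> driver binary 'uds'
    path = os.path.join(plugin_root, entry, driver)
    if not os.path.isfile(path):
        return f"{path}: executable file not found"
    if not os.stat(path).st_mode & stat.S_IXUSR:
        return f"{path}: present but not executable"
    return f"{path}: ok"
```

Run against the plugin root from this log, it would report `.../nodeagent~uds/uds: executable file not found`, matching the driver-call.go message above.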
May 17 00:42:31.138803 env[1298]: time="2025-05-17T00:42:31.138751385Z" level=info msg="CreateContainer within sandbox \"654768cb812657f9341ffdff2de1ac0d6c5d0f2e5d7d5b6956165fb300b69222\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f3ecb3de522b0aa4950c70a97acaa82777a83fba2839f14a5138df1b5d8f7a73\""
May 17 00:42:31.141645 env[1298]: time="2025-05-17T00:42:31.139936460Z" level=info msg="StartContainer for \"f3ecb3de522b0aa4950c70a97acaa82777a83fba2839f14a5138df1b5d8f7a73\""
May 17 00:42:31.269807 env[1298]: time="2025-05-17T00:42:31.269743416Z" level=info msg="StartContainer for \"f3ecb3de522b0aa4950c70a97acaa82777a83fba2839f14a5138df1b5d8f7a73\" returns successfully"
May 17 00:42:31.353012 kubelet[2089]: E0517 00:42:31.352839 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb"
May 17 00:42:31.494935 kubelet[2089]: E0517 00:42:31.494868 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:42:31.522378 kubelet[2089]: E0517 00:42:31.522331 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:42:31.522378 kubelet[2089]: W0517 00:42:31.522368 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:42:31.522762 kubelet[2089]: E0517 00:42:31.522413 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:42:31.534223 kubelet[2089]: E0517 00:42:31.534117 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.535473 kubelet[2089]: E0517 00:42:31.535441 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.535473 kubelet[2089]: W0517 00:42:31.535463 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.535473 kubelet[2089]: E0517 00:42:31.535480 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.535722 kubelet[2089]: E0517 00:42:31.535707 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.535722 kubelet[2089]: W0517 00:42:31.535719 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.535795 kubelet[2089]: E0517 00:42:31.535742 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.540692 kubelet[2089]: E0517 00:42:31.540652 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.540692 kubelet[2089]: W0517 00:42:31.540685 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.540888 kubelet[2089]: E0517 00:42:31.540715 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.541161 kubelet[2089]: E0517 00:42:31.541140 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.541161 kubelet[2089]: W0517 00:42:31.541157 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.541232 kubelet[2089]: E0517 00:42:31.541177 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.541567 kubelet[2089]: E0517 00:42:31.541544 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.541567 kubelet[2089]: W0517 00:42:31.541563 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.541671 kubelet[2089]: E0517 00:42:31.541587 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.541888 kubelet[2089]: E0517 00:42:31.541870 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.541888 kubelet[2089]: W0517 00:42:31.541886 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.541968 kubelet[2089]: E0517 00:42:31.541904 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.544725 kubelet[2089]: E0517 00:42:31.544693 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.544725 kubelet[2089]: W0517 00:42:31.544716 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.544907 kubelet[2089]: E0517 00:42:31.544840 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.544996 kubelet[2089]: E0517 00:42:31.544980 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.545061 kubelet[2089]: W0517 00:42:31.544995 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.545093 kubelet[2089]: E0517 00:42:31.545076 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.545230 kubelet[2089]: E0517 00:42:31.545203 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.545230 kubelet[2089]: W0517 00:42:31.545218 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.547390 kubelet[2089]: E0517 00:42:31.547365 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.547577 kubelet[2089]: E0517 00:42:31.547561 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.547630 kubelet[2089]: W0517 00:42:31.547576 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.547630 kubelet[2089]: E0517 00:42:31.547593 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.554113 kubelet[2089]: E0517 00:42:31.554080 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.554113 kubelet[2089]: W0517 00:42:31.554103 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.554350 kubelet[2089]: E0517 00:42:31.554226 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.559637 kubelet[2089]: E0517 00:42:31.559584 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.559637 kubelet[2089]: W0517 00:42:31.559613 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.559875 kubelet[2089]: E0517 00:42:31.559744 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.559929 kubelet[2089]: E0517 00:42:31.559879 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.559929 kubelet[2089]: W0517 00:42:31.559886 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.560029 kubelet[2089]: E0517 00:42:31.559957 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.560125 kubelet[2089]: E0517 00:42:31.560108 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.560125 kubelet[2089]: W0517 00:42:31.560123 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.560211 kubelet[2089]: E0517 00:42:31.560202 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.560353 kubelet[2089]: E0517 00:42:31.560337 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.560353 kubelet[2089]: W0517 00:42:31.560347 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.560353 kubelet[2089]: E0517 00:42:31.560359 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.563496 kubelet[2089]: E0517 00:42:31.563453 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.563496 kubelet[2089]: W0517 00:42:31.563486 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.563706 kubelet[2089]: E0517 00:42:31.563531 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.565655 kubelet[2089]: E0517 00:42:31.565480 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.565655 kubelet[2089]: W0517 00:42:31.565508 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.567409 kubelet[2089]: E0517 00:42:31.567359 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.567683 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.568656 kubelet[2089]: W0517 00:42:31.567702 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.567724 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.568103 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.568656 kubelet[2089]: W0517 00:42:31.568118 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.568136 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.568586 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:31.568656 kubelet[2089]: W0517 00:42:31.568596 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:31.568656 kubelet[2089]: E0517 00:42:31.568616 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.497754 kubelet[2089]: E0517 00:42:32.497367 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:32.524750 kubelet[2089]: I0517 00:42:32.524497 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d479c4fbb-fpk2m" podStartSLOduration=2.595686223 podStartE2EDuration="5.524473359s" podCreationTimestamp="2025-05-17 00:42:27 +0000 UTC" firstStartedPulling="2025-05-17 00:42:28.16674353 +0000 UTC m=+19.230558350" lastFinishedPulling="2025-05-17 00:42:31.095530663 +0000 UTC m=+22.159345486" observedRunningTime="2025-05-17 00:42:31.530527493 +0000 UTC m=+22.594342335" watchObservedRunningTime="2025-05-17 00:42:32.524473359 +0000 UTC m=+23.588288200" May 17 00:42:32.560107 kubelet[2089]: E0517 00:42:32.560072 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.560340 kubelet[2089]: W0517 00:42:32.560315 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.560466 kubelet[2089]: E0517 00:42:32.560446 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.560842 kubelet[2089]: E0517 00:42:32.560825 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.560953 kubelet[2089]: W0517 00:42:32.560937 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.561048 kubelet[2089]: E0517 00:42:32.561035 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.561316 kubelet[2089]: E0517 00:42:32.561304 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.561432 kubelet[2089]: W0517 00:42:32.561416 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.561551 kubelet[2089]: E0517 00:42:32.561509 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.561882 kubelet[2089]: E0517 00:42:32.561863 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.562005 kubelet[2089]: W0517 00:42:32.561987 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.562093 kubelet[2089]: E0517 00:42:32.562077 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.564237 kubelet[2089]: E0517 00:42:32.564212 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.564444 kubelet[2089]: W0517 00:42:32.564425 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.564546 kubelet[2089]: E0517 00:42:32.564529 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.564882 kubelet[2089]: E0517 00:42:32.564866 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.564988 kubelet[2089]: W0517 00:42:32.564972 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.565074 kubelet[2089]: E0517 00:42:32.565058 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.565495 kubelet[2089]: E0517 00:42:32.565479 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.565622 kubelet[2089]: W0517 00:42:32.565606 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.565718 kubelet[2089]: E0517 00:42:32.565699 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.565966 kubelet[2089]: E0517 00:42:32.565954 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.566040 kubelet[2089]: W0517 00:42:32.566027 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.566106 kubelet[2089]: E0517 00:42:32.566094 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.568547 kubelet[2089]: E0517 00:42:32.568523 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.568743 kubelet[2089]: W0517 00:42:32.568718 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.568849 kubelet[2089]: E0517 00:42:32.568834 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.569119 kubelet[2089]: E0517 00:42:32.569105 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.569221 kubelet[2089]: W0517 00:42:32.569207 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.569364 kubelet[2089]: E0517 00:42:32.569349 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.569668 kubelet[2089]: E0517 00:42:32.569652 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.569783 kubelet[2089]: W0517 00:42:32.569765 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.569875 kubelet[2089]: E0517 00:42:32.569862 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.570184 kubelet[2089]: E0517 00:42:32.570163 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.570385 kubelet[2089]: W0517 00:42:32.570361 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.570518 kubelet[2089]: E0517 00:42:32.570502 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.570861 kubelet[2089]: E0517 00:42:32.570845 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.571004 kubelet[2089]: W0517 00:42:32.570988 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.571152 kubelet[2089]: E0517 00:42:32.571134 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.571535 kubelet[2089]: E0517 00:42:32.571520 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.571644 kubelet[2089]: W0517 00:42:32.571630 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.571728 kubelet[2089]: E0517 00:42:32.571715 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.572005 kubelet[2089]: E0517 00:42:32.571992 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.572126 kubelet[2089]: W0517 00:42:32.572109 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.572222 kubelet[2089]: E0517 00:42:32.572205 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664174 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.664812 kubelet[2089]: W0517 00:42:32.664196 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664219 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664465 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.664812 kubelet[2089]: W0517 00:42:32.664473 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664493 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664710 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.664812 kubelet[2089]: W0517 00:42:32.664723 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.664812 kubelet[2089]: E0517 00:42:32.664736 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.665533 kubelet[2089]: E0517 00:42:32.665508 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.665533 kubelet[2089]: W0517 00:42:32.665529 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.665659 kubelet[2089]: E0517 00:42:32.665551 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.666325 kubelet[2089]: E0517 00:42:32.666288 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.666325 kubelet[2089]: W0517 00:42:32.666322 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.666623 kubelet[2089]: E0517 00:42:32.666481 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.666763 kubelet[2089]: E0517 00:42:32.666745 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.666763 kubelet[2089]: W0517 00:42:32.666763 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.667003 kubelet[2089]: E0517 00:42:32.666863 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.667802 kubelet[2089]: E0517 00:42:32.667780 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.667802 kubelet[2089]: W0517 00:42:32.667798 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.668272 kubelet[2089]: E0517 00:42:32.668032 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.669688 kubelet[2089]: E0517 00:42:32.669666 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.669688 kubelet[2089]: W0517 00:42:32.669684 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.670011 kubelet[2089]: E0517 00:42:32.669861 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.670122 kubelet[2089]: E0517 00:42:32.670045 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.670122 kubelet[2089]: W0517 00:42:32.670056 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.670456 kubelet[2089]: E0517 00:42:32.670201 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.671365 kubelet[2089]: E0517 00:42:32.671346 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.671365 kubelet[2089]: W0517 00:42:32.671362 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.671624 kubelet[2089]: E0517 00:42:32.671493 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.672263 kubelet[2089]: E0517 00:42:32.672244 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.672810 kubelet[2089]: W0517 00:42:32.672263 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.672810 kubelet[2089]: E0517 00:42:32.672324 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.673258 kubelet[2089]: E0517 00:42:32.673234 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.673258 kubelet[2089]: W0517 00:42:32.673253 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.673602 kubelet[2089]: E0517 00:42:32.673455 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.674165 kubelet[2089]: E0517 00:42:32.674139 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.674165 kubelet[2089]: W0517 00:42:32.674164 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.674488 kubelet[2089]: E0517 00:42:32.674323 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.674951 kubelet[2089]: E0517 00:42:32.674934 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.674951 kubelet[2089]: W0517 00:42:32.674949 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.675207 kubelet[2089]: E0517 00:42:32.675052 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.676823 kubelet[2089]: E0517 00:42:32.676800 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.676823 kubelet[2089]: W0517 00:42:32.676818 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.677312 kubelet[2089]: E0517 00:42:32.676986 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.678153 kubelet[2089]: E0517 00:42:32.678134 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.678153 kubelet[2089]: W0517 00:42:32.678149 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.678273 kubelet[2089]: E0517 00:42:32.678170 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.678799 kubelet[2089]: E0517 00:42:32.678546 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.678799 kubelet[2089]: W0517 00:42:32.678559 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.678799 kubelet[2089]: E0517 00:42:32.678581 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:42:32.678799 kubelet[2089]: E0517 00:42:32.678754 2089 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:42:32.678799 kubelet[2089]: W0517 00:42:32.678761 2089 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:42:32.678799 kubelet[2089]: E0517 00:42:32.678773 2089 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:42:32.687505 env[1298]: time="2025-05-17T00:42:32.687455414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:32.689983 env[1298]: time="2025-05-17T00:42:32.689939828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:32.692371 env[1298]: time="2025-05-17T00:42:32.692273607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:32.694904 env[1298]: time="2025-05-17T00:42:32.694863588Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:32.695808 env[1298]: time="2025-05-17T00:42:32.695752084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:42:32.699041 env[1298]: time="2025-05-17T00:42:32.698980955Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:42:32.713869 kernel: kauditd_printk_skb: 25 callbacks suppressed May 17 00:42:32.714044 kernel: audit: type=1325 audit(1747442552.704:304): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:32.704000 audit[2766]: NETFILTER_CFG table=filter:99 
family=2 entries=21 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:32.704000 audit[2766]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff0d79ee00 a2=0 a3=7fff0d79edec items=0 ppid=2190 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.718530 kernel: audit: type=1300 audit(1747442552.704:304): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff0d79ee00 a2=0 a3=7fff0d79edec items=0 ppid=2190 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:32.724589 kernel: audit: type=1327 audit(1747442552.704:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:32.728685 env[1298]: time="2025-05-17T00:42:32.728626709Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03\"" May 17 00:42:32.730218 env[1298]: time="2025-05-17T00:42:32.730161211Z" level=info msg="StartContainer for \"e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03\"" May 17 00:42:32.732000 audit[2766]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:32.736419 kernel: audit: type=1325 audit(1747442552.732:305): table=nat:100 family=2 entries=19 
op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:32.732000 audit[2766]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff0d79ee00 a2=0 a3=7fff0d79edec items=0 ppid=2190 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.743447 kernel: audit: type=1300 audit(1747442552.732:305): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff0d79ee00 a2=0 a3=7fff0d79edec items=0 ppid=2190 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:32.752612 kernel: audit: type=1327 audit(1747442552.732:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:32.848288 env[1298]: time="2025-05-17T00:42:32.848237322Z" level=info msg="StartContainer for \"e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03\" returns successfully" May 17 00:42:32.900436 env[1298]: time="2025-05-17T00:42:32.900386960Z" level=info msg="shim disconnected" id=e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03 May 17 00:42:32.900882 env[1298]: time="2025-05-17T00:42:32.900836276Z" level=warning msg="cleaning up after shim disconnected" id=e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03 namespace=k8s.io May 17 00:42:32.901087 env[1298]: time="2025-05-17T00:42:32.901040368Z" level=info msg="cleaning up dead shim" May 17 00:42:32.912169 env[1298]: time="2025-05-17T00:42:32.912123517Z" level=warning msg="cleanup warnings 
time=\"2025-05-17T00:42:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2817 runtime=io.containerd.runc.v2\n" May 17 00:42:33.112430 systemd[1]: run-containerd-runc-k8s.io-e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03-runc.8FYQlQ.mount: Deactivated successfully. May 17 00:42:33.112597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b243cca10c64aa47d177b2a14947230a5cd1393385b4567423b89684641b03-rootfs.mount: Deactivated successfully. May 17 00:42:33.351509 kubelet[2089]: E0517 00:42:33.351445 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:33.501252 kubelet[2089]: E0517 00:42:33.501211 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:33.504964 env[1298]: time="2025-05-17T00:42:33.504906925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:42:34.502938 kubelet[2089]: E0517 00:42:34.502900 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:35.352907 kubelet[2089]: E0517 00:42:35.352855 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:36.882957 env[1298]: time="2025-05-17T00:42:36.882901308Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:36.884916 env[1298]: time="2025-05-17T00:42:36.884865956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:36.886396 env[1298]: time="2025-05-17T00:42:36.886356605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:36.887845 env[1298]: time="2025-05-17T00:42:36.887807370Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:36.888412 env[1298]: time="2025-05-17T00:42:36.888383545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:42:36.893511 env[1298]: time="2025-05-17T00:42:36.893471525Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:42:36.909030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893963809.mount: Deactivated successfully. 
May 17 00:42:36.913048 env[1298]: time="2025-05-17T00:42:36.912979618Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a\"" May 17 00:42:36.916660 env[1298]: time="2025-05-17T00:42:36.916613370Z" level=info msg="StartContainer for \"72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a\"" May 17 00:42:36.996261 env[1298]: time="2025-05-17T00:42:36.996153516Z" level=info msg="StartContainer for \"72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a\" returns successfully" May 17 00:42:37.351911 kubelet[2089]: E0517 00:42:37.351550 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:37.639869 env[1298]: time="2025-05-17T00:42:37.639804887Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:42:37.666675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a-rootfs.mount: Deactivated successfully. 
May 17 00:42:37.671475 env[1298]: time="2025-05-17T00:42:37.671423382Z" level=info msg="shim disconnected" id=72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a May 17 00:42:37.671475 env[1298]: time="2025-05-17T00:42:37.671473939Z" level=warning msg="cleaning up after shim disconnected" id=72ec8e693e853bdcbd2f21d356edebe73ce3372f6e391cb71a1a24cc2e08eb2a namespace=k8s.io May 17 00:42:37.671475 env[1298]: time="2025-05-17T00:42:37.671483517Z" level=info msg="cleaning up dead shim" May 17 00:42:37.686410 env[1298]: time="2025-05-17T00:42:37.686363563Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2888 runtime=io.containerd.runc.v2\n" May 17 00:42:37.717220 kubelet[2089]: I0517 00:42:37.716636 2089 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:42:37.905686 kubelet[2089]: I0517 00:42:37.905549 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe17efee-1abf-4bdb-bcc2-aea4268fd8b1-calico-apiserver-certs\") pod \"calico-apiserver-7c44f8777-nv4xw\" (UID: \"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1\") " pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" May 17 00:42:37.905686 kubelet[2089]: I0517 00:42:37.905613 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b0969e4-be19-4aa6-ac52-0eb151d2ba1f-config-volume\") pod \"coredns-7c65d6cfc9-rghhh\" (UID: \"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f\") " pod="kube-system/coredns-7c65d6cfc9-rghhh" May 17 00:42:37.905686 kubelet[2089]: I0517 00:42:37.905648 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jtq\" (UniqueName: \"kubernetes.io/projected/1522b219-3da2-4360-a455-7590bb24be2f-kube-api-access-h2jtq\") 
pod \"coredns-7c65d6cfc9-cfghl\" (UID: \"1522b219-3da2-4360-a455-7590bb24be2f\") " pod="kube-system/coredns-7c65d6cfc9-cfghl" May 17 00:42:37.906361 kubelet[2089]: I0517 00:42:37.906234 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5jc\" (UniqueName: \"kubernetes.io/projected/9b0969e4-be19-4aa6-ac52-0eb151d2ba1f-kube-api-access-nt5jc\") pod \"coredns-7c65d6cfc9-rghhh\" (UID: \"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f\") " pod="kube-system/coredns-7c65d6cfc9-rghhh" May 17 00:42:37.906361 kubelet[2089]: I0517 00:42:37.906308 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1522b219-3da2-4360-a455-7590bb24be2f-config-volume\") pod \"coredns-7c65d6cfc9-cfghl\" (UID: \"1522b219-3da2-4360-a455-7590bb24be2f\") " pod="kube-system/coredns-7c65d6cfc9-cfghl" May 17 00:42:37.906361 kubelet[2089]: I0517 00:42:37.906344 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5xtg\" (UniqueName: \"kubernetes.io/projected/8def56ed-36b1-4f37-91fd-6252ab1906d1-kube-api-access-c5xtg\") pod \"calico-kube-controllers-7d946d6c9d-wf9nz\" (UID: \"8def56ed-36b1-4f37-91fd-6252ab1906d1\") " pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" May 17 00:42:37.906593 kubelet[2089]: I0517 00:42:37.906369 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrq8\" (UniqueName: \"kubernetes.io/projected/fe17efee-1abf-4bdb-bcc2-aea4268fd8b1-kube-api-access-whrq8\") pod \"calico-apiserver-7c44f8777-nv4xw\" (UID: \"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1\") " pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" May 17 00:42:37.906593 kubelet[2089]: I0517 00:42:37.906456 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" 
(UniqueName: \"kubernetes.io/secret/2195973a-24fa-48aa-9e37-bf651df76422-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-w9g2b\" (UID: \"2195973a-24fa-48aa-9e37-bf651df76422\") " pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:37.906593 kubelet[2089]: I0517 00:42:37.906492 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-backend-key-pair\") pod \"whisker-c894c5f7c-mgcgc\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " pod="calico-system/whisker-c894c5f7c-mgcgc" May 17 00:42:37.906593 kubelet[2089]: I0517 00:42:37.906514 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-ca-bundle\") pod \"whisker-c894c5f7c-mgcgc\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " pod="calico-system/whisker-c894c5f7c-mgcgc" May 17 00:42:37.906593 kubelet[2089]: I0517 00:42:37.906536 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2195973a-24fa-48aa-9e37-bf651df76422-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-w9g2b\" (UID: \"2195973a-24fa-48aa-9e37-bf651df76422\") " pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:37.906824 kubelet[2089]: I0517 00:42:37.906560 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72820e23-e5e9-4de9-a5fc-0c6f661e245d-calico-apiserver-certs\") pod \"calico-apiserver-7c44f8777-hd5ct\" (UID: \"72820e23-e5e9-4de9-a5fc-0c6f661e245d\") " pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" May 17 00:42:37.906824 kubelet[2089]: I0517 00:42:37.906590 2089 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2195973a-24fa-48aa-9e37-bf651df76422-config\") pod \"goldmane-8f77d7b6c-w9g2b\" (UID: \"2195973a-24fa-48aa-9e37-bf651df76422\") " pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:37.906824 kubelet[2089]: I0517 00:42:37.906613 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7vm\" (UniqueName: \"kubernetes.io/projected/72820e23-e5e9-4de9-a5fc-0c6f661e245d-kube-api-access-mp7vm\") pod \"calico-apiserver-7c44f8777-hd5ct\" (UID: \"72820e23-e5e9-4de9-a5fc-0c6f661e245d\") " pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" May 17 00:42:37.906824 kubelet[2089]: I0517 00:42:37.906644 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6qw\" (UniqueName: \"kubernetes.io/projected/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-kube-api-access-fh6qw\") pod \"whisker-c894c5f7c-mgcgc\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " pod="calico-system/whisker-c894c5f7c-mgcgc" May 17 00:42:37.906824 kubelet[2089]: I0517 00:42:37.906672 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2chb\" (UniqueName: \"kubernetes.io/projected/2195973a-24fa-48aa-9e37-bf651df76422-kube-api-access-c2chb\") pod \"goldmane-8f77d7b6c-w9g2b\" (UID: \"2195973a-24fa-48aa-9e37-bf651df76422\") " pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:37.907082 kubelet[2089]: I0517 00:42:37.906697 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8def56ed-36b1-4f37-91fd-6252ab1906d1-tigera-ca-bundle\") pod \"calico-kube-controllers-7d946d6c9d-wf9nz\" (UID: \"8def56ed-36b1-4f37-91fd-6252ab1906d1\") " pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" May 17 
00:42:38.078220 env[1298]: time="2025-05-17T00:42:38.077881532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-w9g2b,Uid:2195973a-24fa-48aa-9e37-bf651df76422,Namespace:calico-system,Attempt:0,}" May 17 00:42:38.080834 env[1298]: time="2025-05-17T00:42:38.080648445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-nv4xw,Uid:fe17efee-1abf-4bdb-bcc2-aea4268fd8b1,Namespace:calico-apiserver,Attempt:0,}" May 17 00:42:38.083913 env[1298]: time="2025-05-17T00:42:38.083871476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-hd5ct,Uid:72820e23-e5e9-4de9-a5fc-0c6f661e245d,Namespace:calico-apiserver,Attempt:0,}" May 17 00:42:38.269347 env[1298]: time="2025-05-17T00:42:38.269148010Z" level=error msg="Failed to destroy network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.270276 env[1298]: time="2025-05-17T00:42:38.270214849Z" level=error msg="encountered an error cleaning up failed sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.270453 env[1298]: time="2025-05-17T00:42:38.270308598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-w9g2b,Uid:2195973a-24fa-48aa-9e37-bf651df76422,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.270641 kubelet[2089]: E0517 00:42:38.270590 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.270707 kubelet[2089]: E0517 00:42:38.270690 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:38.270747 kubelet[2089]: E0517 00:42:38.270713 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-w9g2b" May 17 00:42:38.270791 kubelet[2089]: E0517 00:42:38.270764 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:42:38.281843 env[1298]: time="2025-05-17T00:42:38.281749739Z" level=error msg="Failed to destroy network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.282170 env[1298]: time="2025-05-17T00:42:38.282131800Z" level=error msg="encountered an error cleaning up failed sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.282241 env[1298]: time="2025-05-17T00:42:38.282196644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-nv4xw,Uid:fe17efee-1abf-4bdb-bcc2-aea4268fd8b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.282526 kubelet[2089]: E0517 00:42:38.282482 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.282614 kubelet[2089]: E0517 00:42:38.282550 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" May 17 00:42:38.282614 kubelet[2089]: E0517 00:42:38.282580 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" May 17 00:42:38.282691 kubelet[2089]: E0517 00:42:38.282628 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c44f8777-nv4xw_calico-apiserver(fe17efee-1abf-4bdb-bcc2-aea4268fd8b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c44f8777-nv4xw_calico-apiserver(fe17efee-1abf-4bdb-bcc2-aea4268fd8b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" 
podUID="fe17efee-1abf-4bdb-bcc2-aea4268fd8b1" May 17 00:42:38.294873 env[1298]: time="2025-05-17T00:42:38.294798466Z" level=error msg="Failed to destroy network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.295477 env[1298]: time="2025-05-17T00:42:38.295436066Z" level=error msg="encountered an error cleaning up failed sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.295648 env[1298]: time="2025-05-17T00:42:38.295617023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-hd5ct,Uid:72820e23-e5e9-4de9-a5fc-0c6f661e245d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.297373 kubelet[2089]: E0517 00:42:38.296010 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.297373 kubelet[2089]: E0517 00:42:38.296068 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" May 17 00:42:38.297373 kubelet[2089]: E0517 00:42:38.296110 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" May 17 00:42:38.297582 kubelet[2089]: E0517 00:42:38.296160 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c44f8777-hd5ct_calico-apiserver(72820e23-e5e9-4de9-a5fc-0c6f661e245d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c44f8777-hd5ct_calico-apiserver(72820e23-e5e9-4de9-a5fc-0c6f661e245d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" podUID="72820e23-e5e9-4de9-a5fc-0c6f661e245d" May 17 00:42:38.348097 kubelet[2089]: E0517 00:42:38.346449 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:38.348306 env[1298]: 
time="2025-05-17T00:42:38.347602928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rghhh,Uid:9b0969e4-be19-4aa6-ac52-0eb151d2ba1f,Namespace:kube-system,Attempt:0,}" May 17 00:42:38.364515 kubelet[2089]: E0517 00:42:38.364473 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:38.367127 env[1298]: time="2025-05-17T00:42:38.367069792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfghl,Uid:1522b219-3da2-4360-a455-7590bb24be2f,Namespace:kube-system,Attempt:0,}" May 17 00:42:38.368020 env[1298]: time="2025-05-17T00:42:38.367990153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d946d6c9d-wf9nz,Uid:8def56ed-36b1-4f37-91fd-6252ab1906d1,Namespace:calico-system,Attempt:0,}" May 17 00:42:38.370088 env[1298]: time="2025-05-17T00:42:38.370057337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c894c5f7c-mgcgc,Uid:68f1c50f-28b1-41aa-a724-f90c88ad2e8d,Namespace:calico-system,Attempt:0,}" May 17 00:42:38.528418 env[1298]: time="2025-05-17T00:42:38.525431003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:42:38.538726 kubelet[2089]: I0517 00:42:38.538693 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:38.544542 env[1298]: time="2025-05-17T00:42:38.544501964Z" level=info msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" May 17 00:42:38.547785 kubelet[2089]: I0517 00:42:38.547583 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:38.548740 env[1298]: time="2025-05-17T00:42:38.548696639Z" level=info 
msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" May 17 00:42:38.550356 kubelet[2089]: I0517 00:42:38.550330 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:38.553557 env[1298]: time="2025-05-17T00:42:38.553518666Z" level=info msg="StopPodSandbox for \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" May 17 00:42:38.567815 env[1298]: time="2025-05-17T00:42:38.567752093Z" level=error msg="Failed to destroy network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.568395 env[1298]: time="2025-05-17T00:42:38.568342429Z" level=error msg="encountered an error cleaning up failed sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.568613 env[1298]: time="2025-05-17T00:42:38.568570233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rghhh,Uid:9b0969e4-be19-4aa6-ac52-0eb151d2ba1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.569011 kubelet[2089]: E0517 00:42:38.568930 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.569080 kubelet[2089]: E0517 00:42:38.569044 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rghhh" May 17 00:42:38.569125 kubelet[2089]: E0517 00:42:38.569077 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rghhh" May 17 00:42:38.569182 kubelet[2089]: E0517 00:42:38.569122 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rghhh_kube-system(9b0969e4-be19-4aa6-ac52-0eb151d2ba1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rghhh_kube-system(9b0969e4-be19-4aa6-ac52-0eb151d2ba1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-rghhh" podUID="9b0969e4-be19-4aa6-ac52-0eb151d2ba1f" May 17 00:42:38.633977 env[1298]: time="2025-05-17T00:42:38.633910958Z" level=error msg="Failed to destroy network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.634347 env[1298]: time="2025-05-17T00:42:38.634287001Z" level=error msg="encountered an error cleaning up failed sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.634424 env[1298]: time="2025-05-17T00:42:38.634373024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c894c5f7c-mgcgc,Uid:68f1c50f-28b1-41aa-a724-f90c88ad2e8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.634781 kubelet[2089]: E0517 00:42:38.634731 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.634857 kubelet[2089]: E0517 00:42:38.634826 2089 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c894c5f7c-mgcgc" May 17 00:42:38.634893 kubelet[2089]: E0517 00:42:38.634854 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c894c5f7c-mgcgc" May 17 00:42:38.634940 kubelet[2089]: E0517 00:42:38.634906 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c894c5f7c-mgcgc_calico-system(68f1c50f-28b1-41aa-a724-f90c88ad2e8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c894c5f7c-mgcgc_calico-system(68f1c50f-28b1-41aa-a724-f90c88ad2e8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c894c5f7c-mgcgc" podUID="68f1c50f-28b1-41aa-a724-f90c88ad2e8d" May 17 00:42:38.652788 env[1298]: time="2025-05-17T00:42:38.652710665Z" level=error msg="Failed to destroy network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.653306 env[1298]: time="2025-05-17T00:42:38.653227791Z" level=error msg="encountered an error cleaning up failed sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.653385 env[1298]: time="2025-05-17T00:42:38.653323869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfghl,Uid:1522b219-3da2-4360-a455-7590bb24be2f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.653686 kubelet[2089]: E0517 00:42:38.653624 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.653771 kubelet[2089]: E0517 00:42:38.653743 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-cfghl" May 17 00:42:38.653823 
kubelet[2089]: E0517 00:42:38.653780 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-cfghl" May 17 00:42:38.655836 kubelet[2089]: E0517 00:42:38.654995 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-cfghl_kube-system(1522b219-3da2-4360-a455-7590bb24be2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-cfghl_kube-system(1522b219-3da2-4360-a455-7590bb24be2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-cfghl" podUID="1522b219-3da2-4360-a455-7590bb24be2f" May 17 00:42:38.676760 env[1298]: time="2025-05-17T00:42:38.676696522Z" level=error msg="Failed to destroy network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.677355 env[1298]: time="2025-05-17T00:42:38.677297638Z" level=error msg="encountered an error cleaning up failed sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.677560 env[1298]: time="2025-05-17T00:42:38.677523940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d946d6c9d-wf9nz,Uid:8def56ed-36b1-4f37-91fd-6252ab1906d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.679578 kubelet[2089]: E0517 00:42:38.679531 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.680927 kubelet[2089]: E0517 00:42:38.679642 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" May 17 00:42:38.680927 kubelet[2089]: E0517 00:42:38.679669 2089 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" May 17 00:42:38.684287 kubelet[2089]: E0517 00:42:38.684227 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d946d6c9d-wf9nz_calico-system(8def56ed-36b1-4f37-91fd-6252ab1906d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d946d6c9d-wf9nz_calico-system(8def56ed-36b1-4f37-91fd-6252ab1906d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" podUID="8def56ed-36b1-4f37-91fd-6252ab1906d1" May 17 00:42:38.704969 env[1298]: time="2025-05-17T00:42:38.704905072Z" level=error msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" failed" error="failed to destroy network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.705246 kubelet[2089]: E0517 00:42:38.705194 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:38.705336 
kubelet[2089]: E0517 00:42:38.705272 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e"} May 17 00:42:38.705378 kubelet[2089]: E0517 00:42:38.705346 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2195973a-24fa-48aa-9e37-bf651df76422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:38.705454 kubelet[2089]: E0517 00:42:38.705393 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2195973a-24fa-48aa-9e37-bf651df76422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:42:38.705555 env[1298]: time="2025-05-17T00:42:38.705514760Z" level=error msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" failed" error="failed to destroy network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.705725 kubelet[2089]: E0517 00:42:38.705682 2089 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:38.705787 kubelet[2089]: E0517 00:42:38.705732 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2"} May 17 00:42:38.705787 kubelet[2089]: E0517 00:42:38.705778 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:38.705876 kubelet[2089]: E0517 00:42:38.705797 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" podUID="fe17efee-1abf-4bdb-bcc2-aea4268fd8b1" May 17 00:42:38.718660 env[1298]: time="2025-05-17T00:42:38.718599696Z" level=error msg="StopPodSandbox for 
\"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" failed" error="failed to destroy network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:38.719189 kubelet[2089]: E0517 00:42:38.719120 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:38.719411 kubelet[2089]: E0517 00:42:38.719209 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07"} May 17 00:42:38.719411 kubelet[2089]: E0517 00:42:38.719286 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72820e23-e5e9-4de9-a5fc-0c6f661e245d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:38.719592 kubelet[2089]: E0517 00:42:38.719443 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72820e23-e5e9-4de9-a5fc-0c6f661e245d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" podUID="72820e23-e5e9-4de9-a5fc-0c6f661e245d" May 17 00:42:39.355980 env[1298]: time="2025-05-17T00:42:39.355920509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrqvm,Uid:cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb,Namespace:calico-system,Attempt:0,}" May 17 00:42:39.440231 env[1298]: time="2025-05-17T00:42:39.440163837Z" level=error msg="Failed to destroy network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.443547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7-shm.mount: Deactivated successfully. 
May 17 00:42:39.445543 env[1298]: time="2025-05-17T00:42:39.445476893Z" level=error msg="encountered an error cleaning up failed sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.445712 env[1298]: time="2025-05-17T00:42:39.445679979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrqvm,Uid:cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.446733 kubelet[2089]: E0517 00:42:39.446098 2089 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.446733 kubelet[2089]: E0517 00:42:39.446182 2089 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xrqvm" May 17 00:42:39.446733 kubelet[2089]: E0517 00:42:39.446205 2089 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xrqvm" May 17 00:42:39.447211 kubelet[2089]: E0517 00:42:39.446271 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xrqvm_calico-system(cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xrqvm_calico-system(cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:39.554103 kubelet[2089]: I0517 00:42:39.553038 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:39.554673 env[1298]: time="2025-05-17T00:42:39.554622791Z" level=info msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" May 17 00:42:39.557196 kubelet[2089]: I0517 00:42:39.556646 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:39.557551 env[1298]: time="2025-05-17T00:42:39.557501333Z" level=info msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" May 17 00:42:39.562346 
kubelet[2089]: I0517 00:42:39.561899 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:39.562926 env[1298]: time="2025-05-17T00:42:39.562886540Z" level=info msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" May 17 00:42:39.567537 kubelet[2089]: I0517 00:42:39.566805 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:39.567980 env[1298]: time="2025-05-17T00:42:39.567933204Z" level=info msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" May 17 00:42:39.570317 kubelet[2089]: I0517 00:42:39.569965 2089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:39.570922 env[1298]: time="2025-05-17T00:42:39.570881164Z" level=info msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" May 17 00:42:39.673591 env[1298]: time="2025-05-17T00:42:39.673525227Z" level=error msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" failed" error="failed to destroy network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.674187 kubelet[2089]: E0517 00:42:39.674022 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:39.674187 kubelet[2089]: E0517 00:42:39.674079 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc"} May 17 00:42:39.674187 kubelet[2089]: E0517 00:42:39.674115 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1522b219-3da2-4360-a455-7590bb24be2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:39.674187 kubelet[2089]: E0517 00:42:39.674140 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1522b219-3da2-4360-a455-7590bb24be2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-cfghl" podUID="1522b219-3da2-4360-a455-7590bb24be2f" May 17 00:42:39.690860 env[1298]: time="2025-05-17T00:42:39.690803280Z" level=error msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" failed" error="failed to destroy network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.691513 kubelet[2089]: E0517 00:42:39.691267 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:39.691513 kubelet[2089]: E0517 00:42:39.691343 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97"} May 17 00:42:39.691513 kubelet[2089]: E0517 00:42:39.691383 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8def56ed-36b1-4f37-91fd-6252ab1906d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:39.691513 kubelet[2089]: E0517 00:42:39.691419 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8def56ed-36b1-4f37-91fd-6252ab1906d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" podUID="8def56ed-36b1-4f37-91fd-6252ab1906d1" May 17 00:42:39.696057 env[1298]: time="2025-05-17T00:42:39.695997735Z" level=error msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" failed" error="failed to destroy network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.696793 kubelet[2089]: E0517 00:42:39.696597 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:39.696793 kubelet[2089]: E0517 00:42:39.696654 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5"} May 17 00:42:39.696793 kubelet[2089]: E0517 00:42:39.696698 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:39.696793 kubelet[2089]: E0517 00:42:39.696734 2089 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rghhh" podUID="9b0969e4-be19-4aa6-ac52-0eb151d2ba1f" May 17 00:42:39.697449 env[1298]: time="2025-05-17T00:42:39.697390246Z" level=error msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" failed" error="failed to destroy network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.697873 kubelet[2089]: E0517 00:42:39.697759 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:39.697873 kubelet[2089]: E0517 00:42:39.697805 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b"} May 17 00:42:39.698115 kubelet[2089]: E0517 00:42:39.698048 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:39.698316 kubelet[2089]: E0517 00:42:39.698246 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c894c5f7c-mgcgc" podUID="68f1c50f-28b1-41aa-a724-f90c88ad2e8d" May 17 00:42:39.711801 env[1298]: time="2025-05-17T00:42:39.711675504Z" level=error msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" failed" error="failed to destroy network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:42:39.712482 kubelet[2089]: E0517 00:42:39.712248 2089 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:39.712482 kubelet[2089]: E0517 00:42:39.712354 2089 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7"} May 17 00:42:39.712482 kubelet[2089]: E0517 00:42:39.712407 2089 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:42:39.712482 kubelet[2089]: E0517 00:42:39.712438 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xrqvm" podUID="cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb" May 17 00:42:45.212908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653240859.mount: Deactivated successfully. 
May 17 00:42:45.252889 env[1298]: time="2025-05-17T00:42:45.252816013Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:45.254652 env[1298]: time="2025-05-17T00:42:45.254597977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:45.256279 env[1298]: time="2025-05-17T00:42:45.256154992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:45.257592 env[1298]: time="2025-05-17T00:42:45.257551494Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:45.258188 env[1298]: time="2025-05-17T00:42:45.258152288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:42:45.343339 env[1298]: time="2025-05-17T00:42:45.343264780Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:42:45.369750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281633258.mount: Deactivated successfully. 
May 17 00:42:45.376145 env[1298]: time="2025-05-17T00:42:45.376069897Z" level=info msg="CreateContainer within sandbox \"177c0e9459d807458a5e0296c81f275dc0911a9a0587612872036d76959c970c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8\"" May 17 00:42:45.380790 env[1298]: time="2025-05-17T00:42:45.380727957Z" level=info msg="StartContainer for \"1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8\"" May 17 00:42:45.488209 env[1298]: time="2025-05-17T00:42:45.485158092Z" level=info msg="StartContainer for \"1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8\" returns successfully" May 17 00:42:45.852165 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:42:45.853017 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:42:46.102412 kubelet[2089]: I0517 00:42:46.097849 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cjrmp" podStartSLOduration=2.101553437 podStartE2EDuration="19.096879416s" podCreationTimestamp="2025-05-17 00:42:27 +0000 UTC" firstStartedPulling="2025-05-17 00:42:28.264482832 +0000 UTC m=+19.328297658" lastFinishedPulling="2025-05-17 00:42:45.259808813 +0000 UTC m=+36.323623637" observedRunningTime="2025-05-17 00:42:45.655220865 +0000 UTC m=+36.719035708" watchObservedRunningTime="2025-05-17 00:42:46.096879416 +0000 UTC m=+37.160694257" May 17 00:42:46.108973 env[1298]: time="2025-05-17T00:42:46.108926505Z" level=info msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.261 [INFO][3344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.261 [INFO][3344] cni-plugin/dataplane_linux.go 
559: Deleting workload's device in netns. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" iface="eth0" netns="/var/run/netns/cni-4084b392-0313-a078-f3d9-62e104daac0d" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.262 [INFO][3344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" iface="eth0" netns="/var/run/netns/cni-4084b392-0313-a078-f3d9-62e104daac0d" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.262 [INFO][3344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" iface="eth0" netns="/var/run/netns/cni-4084b392-0313-a078-f3d9-62e104daac0d" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.263 [INFO][3344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.263 [INFO][3344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.471 [INFO][3351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.473 [INFO][3351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.474 [INFO][3351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.490 [WARNING][3351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.490 [INFO][3351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.492 [INFO][3351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:46.498441 env[1298]: 2025-05-17 00:42:46.495 [INFO][3344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:42:46.502867 env[1298]: time="2025-05-17T00:42:46.502397226Z" level=info msg="TearDown network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" successfully" May 17 00:42:46.502867 env[1298]: time="2025-05-17T00:42:46.502456382Z" level=info msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" returns successfully" May 17 00:42:46.504996 systemd[1]: run-netns-cni\x2d4084b392\x2d0313\x2da078\x2df3d9\x2d62e104daac0d.mount: Deactivated successfully. 
May 17 00:42:46.548524 kubelet[2089]: I0517 00:42:46.548447 2089 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-ca-bundle\") pod \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " May 17 00:42:46.548764 kubelet[2089]: I0517 00:42:46.548553 2089 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-backend-key-pair\") pod \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " May 17 00:42:46.548764 kubelet[2089]: I0517 00:42:46.548613 2089 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh6qw\" (UniqueName: \"kubernetes.io/projected/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-kube-api-access-fh6qw\") pod \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\" (UID: \"68f1c50f-28b1-41aa-a724-f90c88ad2e8d\") " May 17 00:42:46.555151 kubelet[2089]: I0517 00:42:46.553593 2089 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "68f1c50f-28b1-41aa-a724-f90c88ad2e8d" (UID: "68f1c50f-28b1-41aa-a724-f90c88ad2e8d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:42:46.570321 systemd[1]: var-lib-kubelet-pods-68f1c50f\x2d28b1\x2d41aa\x2da724\x2df90c88ad2e8d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:42:46.577067 kubelet[2089]: I0517 00:42:46.577006 2089 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "68f1c50f-28b1-41aa-a724-f90c88ad2e8d" (UID: "68f1c50f-28b1-41aa-a724-f90c88ad2e8d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:42:46.577434 kubelet[2089]: I0517 00:42:46.577182 2089 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-kube-api-access-fh6qw" (OuterVolumeSpecName: "kube-api-access-fh6qw") pod "68f1c50f-28b1-41aa-a724-f90c88ad2e8d" (UID: "68f1c50f-28b1-41aa-a724-f90c88ad2e8d"). InnerVolumeSpecName "kube-api-access-fh6qw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:42:46.581681 systemd[1]: var-lib-kubelet-pods-68f1c50f\x2d28b1\x2d41aa\x2da724\x2df90c88ad2e8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfh6qw.mount: Deactivated successfully. 
May 17 00:42:46.650360 kubelet[2089]: I0517 00:42:46.649551 2089 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-ca-bundle\") on node \"ci-3510.3.7-n-9c3fefbd06\" DevicePath \"\"" May 17 00:42:46.650360 kubelet[2089]: I0517 00:42:46.649629 2089 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-whisker-backend-key-pair\") on node \"ci-3510.3.7-n-9c3fefbd06\" DevicePath \"\"" May 17 00:42:46.650360 kubelet[2089]: I0517 00:42:46.649645 2089 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh6qw\" (UniqueName: \"kubernetes.io/projected/68f1c50f-28b1-41aa-a724-f90c88ad2e8d-kube-api-access-fh6qw\") on node \"ci-3510.3.7-n-9c3fefbd06\" DevicePath \"\"" May 17 00:42:46.863917 kubelet[2089]: I0517 00:42:46.862616 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9ff357f-535d-47da-b193-4f0d8aef3d06-whisker-ca-bundle\") pod \"whisker-667b98bdd5-df8g2\" (UID: \"f9ff357f-535d-47da-b193-4f0d8aef3d06\") " pod="calico-system/whisker-667b98bdd5-df8g2" May 17 00:42:46.863917 kubelet[2089]: I0517 00:42:46.862698 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhg5\" (UniqueName: \"kubernetes.io/projected/f9ff357f-535d-47da-b193-4f0d8aef3d06-kube-api-access-2jhg5\") pod \"whisker-667b98bdd5-df8g2\" (UID: \"f9ff357f-535d-47da-b193-4f0d8aef3d06\") " pod="calico-system/whisker-667b98bdd5-df8g2" May 17 00:42:46.863917 kubelet[2089]: I0517 00:42:46.862720 2089 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/f9ff357f-535d-47da-b193-4f0d8aef3d06-whisker-backend-key-pair\") pod \"whisker-667b98bdd5-df8g2\" (UID: \"f9ff357f-535d-47da-b193-4f0d8aef3d06\") " pod="calico-system/whisker-667b98bdd5-df8g2" May 17 00:42:47.075800 env[1298]: time="2025-05-17T00:42:47.075744828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667b98bdd5-df8g2,Uid:f9ff357f-535d-47da-b193-4f0d8aef3d06,Namespace:calico-system,Attempt:0,}" May 17 00:42:47.223263 systemd[1]: run-containerd-runc-k8s.io-1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8-runc.GZL3jG.mount: Deactivated successfully. May 17 00:42:47.243468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:47.243670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6df83b8771b: link becomes ready May 17 00:42:47.246197 systemd-networkd[1062]: cali6df83b8771b: Link UP May 17 00:42:47.246528 systemd-networkd[1062]: cali6df83b8771b: Gained carrier May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.112 [INFO][3394] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.128 [INFO][3394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0 whisker-667b98bdd5- calico-system f9ff357f-535d-47da-b193-4f0d8aef3d06 907 0 2025-05-17 00:42:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:667b98bdd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 whisker-667b98bdd5-df8g2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6df83b8771b [] [] }} ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" 
WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.128 [INFO][3394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.166 [INFO][3407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" HandleID="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.166 [INFO][3407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" HandleID="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"whisker-667b98bdd5-df8g2", "timestamp":"2025-05-17 00:42:47.166642033 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.167 [INFO][3407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.167 [INFO][3407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.167 [INFO][3407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.177 [INFO][3407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.187 [INFO][3407] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.193 [INFO][3407] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.196 [INFO][3407] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.199 [INFO][3407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.200 [INFO][3407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.202 [INFO][3407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.207 [INFO][3407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.219 [INFO][3407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.1/26] block=192.168.99.0/26 
handle="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.219 [INFO][3407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.1/26] handle="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.219 [INFO][3407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:47.268902 env[1298]: 2025-05-17 00:42:47.219 [INFO][3407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.1/26] IPv6=[] ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" HandleID="k8s-pod-network.ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.229 [INFO][3394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0", GenerateName:"whisker-667b98bdd5-", Namespace:"calico-system", SelfLink:"", UID:"f9ff357f-535d-47da-b193-4f0d8aef3d06", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"667b98bdd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"whisker-667b98bdd5-df8g2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6df83b8771b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.229 [INFO][3394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.1/32] ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.229 [INFO][3394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6df83b8771b ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.244 [INFO][3394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.247 [INFO][3394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0", GenerateName:"whisker-667b98bdd5-", Namespace:"calico-system", SelfLink:"", UID:"f9ff357f-535d-47da-b193-4f0d8aef3d06", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"667b98bdd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b", Pod:"whisker-667b98bdd5-df8g2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6df83b8771b", MAC:"32:b6:d7:03:fc:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:47.271155 env[1298]: 2025-05-17 00:42:47.263 [INFO][3394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b" Namespace="calico-system" Pod="whisker-667b98bdd5-df8g2" 
WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--667b98bdd5--df8g2-eth0" May 17 00:42:47.288601 env[1298]: time="2025-05-17T00:42:47.288376347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:47.288601 env[1298]: time="2025-05-17T00:42:47.288505999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:47.288601 env[1298]: time="2025-05-17T00:42:47.288519346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:47.295794 env[1298]: time="2025-05-17T00:42:47.289266386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b pid=3429 runtime=io.containerd.runc.v2 May 17 00:42:47.359301 kubelet[2089]: I0517 00:42:47.358419 2089 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f1c50f-28b1-41aa-a724-f90c88ad2e8d" path="/var/lib/kubelet/pods/68f1c50f-28b1-41aa-a724-f90c88ad2e8d/volumes" May 17 00:42:47.369969 env[1298]: time="2025-05-17T00:42:47.369633195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667b98bdd5-df8g2,Uid:f9ff357f-535d-47da-b193-4f0d8aef3d06,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec15baa68722169bfcb8dad12950ebe71c6adc5a3ed6066a452209058d29753b\"" May 17 00:42:47.375119 env[1298]: time="2025-05-17T00:42:47.374483610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:42:47.613418 env[1298]: time="2025-05-17T00:42:47.613215611Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:42:47.618372 env[1298]: time="2025-05-17T00:42:47.617770566Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:42:47.622129 kubelet[2089]: E0517 00:42:47.618077 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:42:47.622129 kubelet[2089]: E0517 00:42:47.618145 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:42:47.630434 kubelet[2089]: E0517 00:42:47.629851 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8231cfdb1ec4b548cb8c42dc4784a49,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:42:47.632253 env[1298]: time="2025-05-17T00:42:47.632169311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:42:47.682000 
audit[3498]: AVC avc: denied { write } for pid=3498 comm="tee" name="fd" dev="proc" ino=25014 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.682000 audit[3498]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff089f37ce a2=241 a3=1b6 items=1 ppid=3479 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.704556 kernel: audit: type=1400 audit(1747442567.682:306): avc: denied { write } for pid=3498 comm="tee" name="fd" dev="proc" ino=25014 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.704965 kernel: audit: type=1300 audit(1747442567.682:306): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff089f37ce a2=241 a3=1b6 items=1 ppid=3479 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.682000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:42:47.682000 audit: PATH item=0 name="/dev/fd/63" inode=24573 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.708938 kernel: audit: type=1307 audit(1747442567.682:306): cwd="/etc/service/enabled/cni/log" May 17 00:42:47.709004 kernel: audit: type=1302 audit(1747442567.682:306): item=0 name="/dev/fd/63" inode=24573 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.682000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.712000 audit[3525]: AVC avc: denied { write } for pid=3525 comm="tee" name="fd" dev="proc" ino=25627 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.724725 kernel: audit: type=1327 audit(1747442567.682:306): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.724849 kernel: audit: type=1400 audit(1747442567.712:307): avc: denied { write } for pid=3525 comm="tee" name="fd" dev="proc" ino=25627 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.712000 audit[3525]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff627677cc a2=241 a3=1b6 items=1 ppid=3505 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.731124 kernel: audit: type=1300 audit(1747442567.712:307): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff627677cc a2=241 a3=1b6 items=1 ppid=3505 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.731211 kernel: audit: type=1307 audit(1747442567.712:307): cwd="/etc/service/enabled/confd/log" May 17 00:42:47.712000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:42:47.712000 audit: PATH item=0 name="/dev/fd/63" inode=25622 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.736654 kernel: 
audit: type=1302 audit(1747442567.712:307): item=0 name="/dev/fd/63" inode=25622 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.736793 kernel: audit: type=1327 audit(1747442567.712:307): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.718000 audit[3523]: AVC avc: denied { write } for pid=3523 comm="tee" name="fd" dev="proc" ino=25637 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.718000 audit[3523]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe868c47bd a2=241 a3=1b6 items=1 ppid=3502 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.718000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:42:47.718000 audit: PATH item=0 name="/dev/fd/63" inode=25621 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.764000 audit[3551]: AVC avc: denied { write } for pid=3551 comm="tee" name="fd" dev="proc" ino=25652 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.764000 audit[3551]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7ffcce3487bc a2=241 a3=1b6 items=1 ppid=3510 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.764000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:42:47.764000 audit: PATH item=0 name="/dev/fd/63" inode=25649 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.764000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.766000 audit[3545]: AVC avc: denied { write } for pid=3545 comm="tee" name="fd" dev="proc" ino=25037 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.766000 audit[3545]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd106e67cd a2=241 a3=1b6 items=1 ppid=3482 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.766000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:42:47.766000 audit: PATH item=0 name="/dev/fd/63" inode=25028 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.772000 audit[3543]: AVC avc: denied { write } for pid=3543 comm="tee" name="fd" dev="proc" ino=25042 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.772000 audit[3543]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd63e597cc a2=241 a3=1b6 items=1 ppid=3499 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.772000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:42:47.772000 audit: PATH item=0 name="/dev/fd/63" inode=25027 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.779000 audit[3548]: AVC avc: denied { write } for pid=3548 comm="tee" name="fd" dev="proc" ino=25046 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:42:47.779000 audit[3548]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe099ea7cc a2=241 a3=1b6 items=1 ppid=3489 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.779000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:42:47.779000 audit: PATH item=0 name="/dev/fd/63" inode=25034 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:42:47.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:42:47.850439 env[1298]: time="2025-05-17T00:42:47.850237200Z" 
level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:42:47.851569 env[1298]: time="2025-05-17T00:42:47.851378078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:42:47.852371 kubelet[2089]: E0517 00:42:47.851740 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:42:47.852371 kubelet[2089]: E0517 00:42:47.851847 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:42:47.852581 kubelet[2089]: E0517 00:42:47.852052 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:42:47.853502 kubelet[2089]: E0517 00:42:47.853451 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:42:48.215646 systemd[1]: run-containerd-runc-k8s.io-1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8-runc.vwoaz2.mount: Deactivated successfully. 
May 17 00:42:48.259000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.259000 audit: BPF prog-id=10 op=LOAD May 17 
00:42:48.259000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd63895dd0 a2=98 a3=3 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.259000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.262000 audit: BPF prog-id=10 op=UNLOAD May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit: BPF prog-id=11 op=LOAD May 17 00:42:48.264000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd63895bc0 a2=94 a3=54428f items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.264000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.264000 audit: BPF prog-id=11 op=UNLOAD May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.264000 audit: BPF prog-id=12 op=LOAD May 17 00:42:48.264000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd63895bf0 a2=94 a3=2 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.264000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.264000 audit: BPF prog-id=12 op=UNLOAD May 17 00:42:48.422000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.444700 systemd-networkd[1062]: cali6df83b8771b: Gained IPv6LL May 17 00:42:48.422000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.422000 audit: BPF prog-id=13 op=LOAD May 17 00:42:48.422000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd63895ab0 a2=94 a3=1 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.448000 audit: BPF prog-id=13 op=UNLOAD May 17 00:42:48.448000 
audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.448000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd63895b80 a2=50 a3=7ffd63895c60 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.448000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.464000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.464000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895ac0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.464000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.464000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.464000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd63895af0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.464000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd63895a00 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895b10 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895af0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895ae0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895b10 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd63895af0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd63895b10 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd63895ae0 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.465000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.465000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd63895b50 a2=28 a3=0 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=0 a1=7ffd63895900 a2=50 a3=1 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit: BPF prog-id=14 op=LOAD May 17 00:42:48.466000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd63895900 a2=94 a3=5 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.466000 audit: BPF prog-id=14 op=UNLOAD May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd638959b0 a2=50 a3=1 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd63895ad0 a2=4 a3=38 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.466000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.466000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd63895b20 a2=94 a3=6 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.467000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd638952d0 a2=94 a3=88 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.467000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { perfmon } for pid=3596 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { bpf } for pid=3596 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.467000 audit[3596]: AVC avc: denied { confidentiality } for pid=3596 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.467000 audit[3596]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd638952d0 a2=94 a3=88 items=0 ppid=3490 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.467000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit: BPF prog-id=15 op=LOAD May 17 00:42:48.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce0f3900 a2=98 a3=1999999999999999 items=0 ppid=3490 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.483000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:42:48.483000 audit: BPF prog-id=15 op=UNLOAD May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit: BPF prog-id=16 op=LOAD May 17 00:42:48.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce0f37e0 a2=94 a3=ffff items=0 ppid=3490 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.483000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:42:48.483000 audit: BPF prog-id=16 op=UNLOAD May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { perfmon } for pid=3601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit[3601]: AVC avc: denied { bpf } for pid=3601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.483000 audit: BPF prog-id=17 op=LOAD May 17 00:42:48.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdce0f3820 a2=94 a3=7ffdce0f3a00 items=0 ppid=3490 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.483000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:42:48.483000 audit: BPF prog-id=17 op=UNLOAD May 17 00:42:48.564240 systemd-networkd[1062]: vxlan.calico: Link UP May 17 00:42:48.564246 systemd-networkd[1062]: vxlan.calico: Gained carrier May 17 00:42:48.599322 kubelet[2089]: E0517 00:42:48.598426 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit: BPF prog-id=18 op=LOAD May 17 00:42:48.606000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceaf5f7e0 a2=98 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:42:48.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.606000 audit: BPF prog-id=18 op=UNLOAD May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.606000 audit: BPF prog-id=19 op=LOAD May 17 00:42:48.606000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceaf5f5f0 a2=94 a3=54428f items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.606000 audit: BPF prog-id=19 op=UNLOAD May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit: BPF prog-id=20 op=LOAD May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceaf5f620 a2=94 a3=2 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit: BPF prog-id=20 op=UNLOAD May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f4f0 a2=28 a3=0 items=0 ppid=3490 pid=3626 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceaf5f520 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceaf5f430 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f540 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f520 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f510 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f540 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceaf5f520 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceaf5f540 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceaf5f510 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffceaf5f580 a2=28 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:42:48.607000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit: BPF prog-id=21 op=LOAD May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceaf5f3f0 a2=94 a3=0 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit: BPF prog-id=21 op=UNLOAD May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffceaf5f3e0 a2=50 a3=2800 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.607000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.607000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffceaf5f3e0 a2=50 a3=2800 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.607000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit: BPF prog-id=22 op=LOAD May 17 00:42:48.608000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceaf5ec00 a2=94 a3=2 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.608000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.608000 audit: BPF prog-id=22 op=UNLOAD May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { perfmon } for pid=3626 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit[3626]: AVC avc: denied { bpf } for pid=3626 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.608000 audit: BPF prog-id=23 op=LOAD May 17 00:42:48.608000 audit[3626]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceaf5ed00 a2=94 a3=30 items=0 ppid=3490 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.608000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.629000 audit: BPF prog-id=24 op=LOAD May 17 00:42:48.629000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffceec60370 a2=98 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.629000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.629000 audit: BPF prog-id=24 op=UNLOAD May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC 
avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit: BPF prog-id=25 op=LOAD May 17 00:42:48.633000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffceec60160 a2=94 a3=54428f items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.633000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.633000 audit: BPF prog-id=25 op=UNLOAD May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.633000 audit: BPF prog-id=26 op=LOAD May 17 00:42:48.633000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7ffceec60190 a2=94 a3=2 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.633000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.633000 audit: BPF prog-id=26 op=UNLOAD May 17 00:42:48.663000 audit[3634]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:48.663000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff961fc520 a2=0 a3=7fff961fc50c items=0 ppid=2190 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:48.667000 audit[3634]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=3634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:48.667000 audit[3634]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff961fc520 a2=0 a3=0 items=0 ppid=2190 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit: BPF prog-id=27 op=LOAD May 17 00:42:48.774000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffceec60050 a2=94 a3=1 
items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.774000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.774000 audit: BPF prog-id=27 op=UNLOAD May 17 00:42:48.774000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.774000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffceec60120 a2=50 a3=7ffceec60200 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.774000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec60060 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceec60090 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceec5ffa0 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec600b0 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec60090 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec60080 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec600b0 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceec60090 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceec600b0 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffceec60080 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffceec600f0 a2=28 a3=0 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffceec5fea0 a2=50 a3=1 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.787000 audit: BPF prog-id=28 op=LOAD May 17 00:42:48.787000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffceec5fea0 a2=94 a3=5 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.787000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.788000 audit: BPF prog-id=28 op=UNLOAD May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffceec5ff50 a2=50 a3=1 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffceec60070 a2=4 a3=38 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { confidentiality } for pid=3629 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.788000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffceec600c0 a2=94 a3=6 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.788000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { confidentiality } for pid=3629 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.788000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffceec5f870 a2=94 a3=88 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.788000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { perfmon } for pid=3629 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.788000 audit[3629]: AVC avc: denied { confidentiality } for pid=3629 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:42:48.788000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffceec5f870 a2=94 a3=88 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.788000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.789000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.789000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffceec612a0 a2=10 a3=f8f00800 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.789000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.789000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffceec61140 a2=10 a3=3 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.789000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.789000 audit[3629]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffceec610e0 a2=10 a3=3 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.789000 audit[3629]: AVC avc: denied { bpf } for pid=3629 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:48.789000 audit[3629]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffceec610e0 a2=10 a3=7 items=0 ppid=3490 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.789000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:42:48.797000 audit: BPF prog-id=23 op=UNLOAD May 17 00:42:48.892000 audit[3660]: NETFILTER_CFG table=nat:103 family=2 entries=15 op=nft_register_chain pid=3660 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:48.892000 audit[3660]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffd0d02a10 a2=0 a3=7fffd0d029fc items=0 ppid=3490 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.892000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:48.892000 audit[3661]: NETFILTER_CFG table=mangle:104 family=2 entries=16 op=nft_register_chain pid=3661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:48.892000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeac2787c0 a2=0 a3=7ffeac2787ac items=0 ppid=3490 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:48.900000 audit[3659]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:48.900000 audit[3659]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffec8341930 a2=0 a3=7ffec834191c items=0 ppid=3490 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.900000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:48.906000 audit[3662]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:48.906000 audit[3662]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffdf6310ef0 a2=0 a3=7ffdf6310edc items=0 ppid=3490 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.906000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:49.355085 env[1298]: time="2025-05-17T00:42:49.355016907Z" level=info msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.419 [INFO][3685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.420 [INFO][3685] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" iface="eth0" netns="/var/run/netns/cni-dd7e491f-6df5-eee5-9803-b6a3c95bd548" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.420 [INFO][3685] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" iface="eth0" netns="/var/run/netns/cni-dd7e491f-6df5-eee5-9803-b6a3c95bd548" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.421 [INFO][3685] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" iface="eth0" netns="/var/run/netns/cni-dd7e491f-6df5-eee5-9803-b6a3c95bd548" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.421 [INFO][3685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.421 [INFO][3685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.451 [INFO][3692] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.451 [INFO][3692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.452 [INFO][3692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.459 [WARNING][3692] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.459 [INFO][3692] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.462 [INFO][3692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:49.467169 env[1298]: 2025-05-17 00:42:49.464 [INFO][3685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:42:49.471027 env[1298]: time="2025-05-17T00:42:49.470969650Z" level=info msg="TearDown network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" successfully" May 17 00:42:49.471196 env[1298]: time="2025-05-17T00:42:49.471176927Z" level=info msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" returns successfully" May 17 00:42:49.472438 systemd[1]: run-netns-cni\x2ddd7e491f\x2d6df5\x2deee5\x2d9803\x2db6a3c95bd548.mount: Deactivated successfully. 
May 17 00:42:49.474812 env[1298]: time="2025-05-17T00:42:49.474776508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-nv4xw,Uid:fe17efee-1abf-4bdb-bcc2-aea4268fd8b1,Namespace:calico-apiserver,Attempt:1,}" May 17 00:42:49.698999 systemd-networkd[1062]: calic50c88da89b: Link UP May 17 00:42:49.702254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:49.702437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic50c88da89b: link becomes ready May 17 00:42:49.702607 systemd-networkd[1062]: calic50c88da89b: Gained carrier May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.557 [INFO][3698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0 calico-apiserver-7c44f8777- calico-apiserver fe17efee-1abf-4bdb-bcc2-aea4268fd8b1 930 0 2025-05-17 00:42:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c44f8777 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 calico-apiserver-7c44f8777-nv4xw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic50c88da89b [] [] }} ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.558 [INFO][3698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 
00:42:49.721383 env[1298]: 2025-05-17 00:42:49.612 [INFO][3712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" HandleID="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.613 [INFO][3712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" HandleID="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"calico-apiserver-7c44f8777-nv4xw", "timestamp":"2025-05-17 00:42:49.612933265 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.613 [INFO][3712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.613 [INFO][3712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.613 [INFO][3712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.623 [INFO][3712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.632 [INFO][3712] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.640 [INFO][3712] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.643 [INFO][3712] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.648 [INFO][3712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.649 [INFO][3712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.652 [INFO][3712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702 May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.662 [INFO][3712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.678 [INFO][3712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.2/26] block=192.168.99.0/26 
handle="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.678 [INFO][3712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.2/26] handle="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.678 [INFO][3712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:49.721383 env[1298]: 2025-05-17 00:42:49.678 [INFO][3712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.2/26] IPv6=[] ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" HandleID="k8s-pod-network.1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.722228 env[1298]: 2025-05-17 00:42:49.693 [INFO][3698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"calico-apiserver-7c44f8777-nv4xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50c88da89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:49.722228 env[1298]: 2025-05-17 00:42:49.694 [INFO][3698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.2/32] ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.722228 env[1298]: 2025-05-17 00:42:49.694 [INFO][3698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic50c88da89b ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.722228 env[1298]: 2025-05-17 00:42:49.703 [INFO][3698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 
00:42:49.722228 env[1298]: 2025-05-17 00:42:49.704 [INFO][3698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702", Pod:"calico-apiserver-7c44f8777-nv4xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50c88da89b", MAC:"72:ed:97:d9:ce:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 
00:42:49.722228 env[1298]: 2025-05-17 00:42:49.714 [INFO][3698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-nv4xw" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:42:49.742732 env[1298]: time="2025-05-17T00:42:49.742529224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:49.742732 env[1298]: time="2025-05-17T00:42:49.742643287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:49.742732 env[1298]: time="2025-05-17T00:42:49.742655702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:49.743072 env[1298]: time="2025-05-17T00:42:49.742958402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702 pid=3736 runtime=io.containerd.runc.v2 May 17 00:42:49.747000 audit[3737]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:49.747000 audit[3737]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe868a9130 a2=0 a3=7ffe868a911c items=0 ppid=3490 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:49.747000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 
00:42:49.832263 env[1298]: time="2025-05-17T00:42:49.832200144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-nv4xw,Uid:fe17efee-1abf-4bdb-bcc2-aea4268fd8b1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702\"" May 17 00:42:49.836772 env[1298]: time="2025-05-17T00:42:49.836674014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:42:50.353393 env[1298]: time="2025-05-17T00:42:50.352702725Z" level=info msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" May 17 00:42:50.354321 env[1298]: time="2025-05-17T00:42:50.354259821Z" level=info msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.439 [INFO][3785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.440 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" iface="eth0" netns="/var/run/netns/cni-5eb2a8a4-438a-b865-df8a-4f7173e24c67" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.441 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" iface="eth0" netns="/var/run/netns/cni-5eb2a8a4-438a-b865-df8a-4f7173e24c67" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.441 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" iface="eth0" netns="/var/run/netns/cni-5eb2a8a4-438a-b865-df8a-4f7173e24c67" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.441 [INFO][3785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.441 [INFO][3785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.508 [INFO][3804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.508 [INFO][3804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.508 [INFO][3804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.517 [WARNING][3804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.517 [INFO][3804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.519 [INFO][3804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:50.526867 env[1298]: 2025-05-17 00:42:50.524 [INFO][3785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:42:50.532579 systemd[1]: run-netns-cni\x2d5eb2a8a4\x2d438a\x2db865\x2ddf8a\x2d4f7173e24c67.mount: Deactivated successfully. 
May 17 00:42:50.535401 systemd-networkd[1062]: vxlan.calico: Gained IPv6LL May 17 00:42:50.540646 env[1298]: time="2025-05-17T00:42:50.540573014Z" level=info msg="TearDown network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" successfully" May 17 00:42:50.540946 env[1298]: time="2025-05-17T00:42:50.540914651Z" level=info msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" returns successfully" May 17 00:42:50.542139 env[1298]: time="2025-05-17T00:42:50.542107752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-w9g2b,Uid:2195973a-24fa-48aa-9e37-bf651df76422,Namespace:calico-system,Attempt:1,}" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.449 [INFO][3796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.449 [INFO][3796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" iface="eth0" netns="/var/run/netns/cni-d98f32b1-65d5-aea3-408f-c0ad8b0f8404" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.450 [INFO][3796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" iface="eth0" netns="/var/run/netns/cni-d98f32b1-65d5-aea3-408f-c0ad8b0f8404" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.452 [INFO][3796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" iface="eth0" netns="/var/run/netns/cni-d98f32b1-65d5-aea3-408f-c0ad8b0f8404" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.452 [INFO][3796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.452 [INFO][3796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.508 [INFO][3810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.509 [INFO][3810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.519 [INFO][3810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.542 [WARNING][3810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.542 [INFO][3810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.545 [INFO][3810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:50.567793 env[1298]: 2025-05-17 00:42:50.563 [INFO][3796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:42:50.571437 systemd[1]: run-netns-cni\x2dd98f32b1\x2d65d5\x2daea3\x2d408f\x2dc0ad8b0f8404.mount: Deactivated successfully. 
May 17 00:42:50.575139 env[1298]: time="2025-05-17T00:42:50.575093677Z" level=info msg="TearDown network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" successfully" May 17 00:42:50.575277 env[1298]: time="2025-05-17T00:42:50.575170265Z" level=info msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" returns successfully" May 17 00:42:50.576199 env[1298]: time="2025-05-17T00:42:50.576144306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d946d6c9d-wf9nz,Uid:8def56ed-36b1-4f37-91fd-6252ab1906d1,Namespace:calico-system,Attempt:1,}" May 17 00:42:50.758680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:50.759042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif5633de16d9: link becomes ready May 17 00:42:50.760486 systemd-networkd[1062]: calif5633de16d9: Link UP May 17 00:42:50.760730 systemd-networkd[1062]: calif5633de16d9: Gained carrier May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.649 [INFO][3818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0 goldmane-8f77d7b6c- calico-system 2195973a-24fa-48aa-9e37-bf651df76422 939 0 2025-05-17 00:42:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 goldmane-8f77d7b6c-w9g2b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif5633de16d9 [] [] }} ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.649 [INFO][3818] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.698 [INFO][3844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" HandleID="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.698 [INFO][3844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" HandleID="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000377720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"goldmane-8f77d7b6c-w9g2b", "timestamp":"2025-05-17 00:42:50.698684472 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.699 [INFO][3844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.699 [INFO][3844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.699 [INFO][3844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.706 [INFO][3844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.715 [INFO][3844] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.724 [INFO][3844] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.727 [INFO][3844] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.730 [INFO][3844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.730 [INFO][3844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.732 [INFO][3844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35 May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.737 [INFO][3844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.745 [INFO][3844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.3/26] block=192.168.99.0/26 
handle="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.745 [INFO][3844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.3/26] handle="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.745 [INFO][3844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:50.796245 env[1298]: 2025-05-17 00:42:50.745 [INFO][3844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.3/26] IPv6=[] ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" HandleID="k8s-pod-network.cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.748 [INFO][3818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"2195973a-24fa-48aa-9e37-bf651df76422", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"goldmane-8f77d7b6c-w9g2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif5633de16d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.749 [INFO][3818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.3/32] ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.749 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5633de16d9 ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.755 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.758 [INFO][3818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"2195973a-24fa-48aa-9e37-bf651df76422", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35", Pod:"goldmane-8f77d7b6c-w9g2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif5633de16d9", MAC:"3e:6e:53:9c:de:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:50.797049 env[1298]: 2025-05-17 00:42:50.777 [INFO][3818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35" Namespace="calico-system" Pod="goldmane-8f77d7b6c-w9g2b" 
WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:42:50.797000 audit[3865]: NETFILTER_CFG table=filter:108 family=2 entries=48 op=nft_register_chain pid=3865 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:50.797000 audit[3865]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7fff10890c60 a2=0 a3=7fff10890c4c items=0 ppid=3490 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:50.797000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:50.821193 env[1298]: time="2025-05-17T00:42:50.821099445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:50.821408 env[1298]: time="2025-05-17T00:42:50.821206635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:50.821408 env[1298]: time="2025-05-17T00:42:50.821234315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:50.821526 env[1298]: time="2025-05-17T00:42:50.821486468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35 pid=3878 runtime=io.containerd.runc.v2 May 17 00:42:50.851939 systemd-networkd[1062]: calic50c88da89b: Gained IPv6LL May 17 00:42:50.865104 systemd-networkd[1062]: caliccf61d4ea9b: Link UP May 17 00:42:50.867631 systemd-networkd[1062]: caliccf61d4ea9b: Gained carrier May 17 00:42:50.868391 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliccf61d4ea9b: link becomes ready May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.667 [INFO][3831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0 calico-kube-controllers-7d946d6c9d- calico-system 8def56ed-36b1-4f37-91fd-6252ab1906d1 940 0 2025-05-17 00:42:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d946d6c9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 calico-kube-controllers-7d946d6c9d-wf9nz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliccf61d4ea9b [] [] }} ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.667 [INFO][3831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" 
Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.745 [INFO][3852] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" HandleID="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.746 [INFO][3852] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" HandleID="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"calico-kube-controllers-7d946d6c9d-wf9nz", "timestamp":"2025-05-17 00:42:50.745552339 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.746 [INFO][3852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.746 [INFO][3852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.746 [INFO][3852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.808 [INFO][3852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.816 [INFO][3852] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.824 [INFO][3852] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.827 [INFO][3852] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.832 [INFO][3852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.833 [INFO][3852] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.840 [INFO][3852] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280 May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.846 [INFO][3852] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.858 [INFO][3852] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.4/26] block=192.168.99.0/26 
handle="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.858 [INFO][3852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.4/26] handle="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.858 [INFO][3852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:50.897115 env[1298]: 2025-05-17 00:42:50.858 [INFO][3852] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.4/26] IPv6=[] ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" HandleID="k8s-pod-network.3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.861 [INFO][3831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0", GenerateName:"calico-kube-controllers-7d946d6c9d-", Namespace:"calico-system", SelfLink:"", UID:"8def56ed-36b1-4f37-91fd-6252ab1906d1", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d946d6c9d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"calico-kube-controllers-7d946d6c9d-wf9nz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccf61d4ea9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.861 [INFO][3831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.4/32] ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.861 [INFO][3831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccf61d4ea9b ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.877 [INFO][3831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" 
WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.881 [INFO][3831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0", GenerateName:"calico-kube-controllers-7d946d6c9d-", Namespace:"calico-system", SelfLink:"", UID:"8def56ed-36b1-4f37-91fd-6252ab1906d1", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d946d6c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280", Pod:"calico-kube-controllers-7d946d6c9d-wf9nz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccf61d4ea9b", 
MAC:"ce:1e:b9:29:2e:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:50.897907 env[1298]: 2025-05-17 00:42:50.894 [INFO][3831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280" Namespace="calico-system" Pod="calico-kube-controllers-7d946d6c9d-wf9nz" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:42:50.924427 env[1298]: time="2025-05-17T00:42:50.924350365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:50.924956 env[1298]: time="2025-05-17T00:42:50.924921539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:50.925159 env[1298]: time="2025-05-17T00:42:50.925131819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:50.927620 env[1298]: time="2025-05-17T00:42:50.927570290Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280 pid=3915 runtime=io.containerd.runc.v2 May 17 00:42:50.938000 audit[3927]: NETFILTER_CFG table=filter:109 family=2 entries=50 op=nft_register_chain pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:50.938000 audit[3927]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffe58d54cf0 a2=0 a3=7ffe58d54cdc items=0 ppid=3490 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:50.938000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:50.961584 env[1298]: time="2025-05-17T00:42:50.961440577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-w9g2b,Uid:2195973a-24fa-48aa-9e37-bf651df76422,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35\"" May 17 00:42:51.016695 env[1298]: time="2025-05-17T00:42:51.016515240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d946d6c9d-wf9nz,Uid:8def56ed-36b1-4f37-91fd-6252ab1906d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280\"" May 17 00:42:52.355912 env[1298]: time="2025-05-17T00:42:52.355745571Z" level=info msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.496 [INFO][3968] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.496 [INFO][3968] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" iface="eth0" netns="/var/run/netns/cni-eeedbf7e-3782-79d8-7d71-0d36522242ff" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.496 [INFO][3968] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" iface="eth0" netns="/var/run/netns/cni-eeedbf7e-3782-79d8-7d71-0d36522242ff" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.497 [INFO][3968] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" iface="eth0" netns="/var/run/netns/cni-eeedbf7e-3782-79d8-7d71-0d36522242ff" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.497 [INFO][3968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.497 [INFO][3968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.605 [INFO][3975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.608 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.608 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.620 [WARNING][3975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.620 [INFO][3975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.623 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:52.630530 env[1298]: 2025-05-17 00:42:52.626 [INFO][3968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:42:52.637782 systemd[1]: run-netns-cni\x2deeedbf7e\x2d3782\x2d79d8\x2d7d71\x2d0d36522242ff.mount: Deactivated successfully. 
May 17 00:42:52.639067 env[1298]: time="2025-05-17T00:42:52.638965997Z" level=info msg="TearDown network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" successfully" May 17 00:42:52.639220 env[1298]: time="2025-05-17T00:42:52.639063220Z" level=info msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" returns successfully" May 17 00:42:52.643443 kubelet[2089]: E0517 00:42:52.643066 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:52.644508 systemd-networkd[1062]: calif5633de16d9: Gained IPv6LL May 17 00:42:52.661497 env[1298]: time="2025-05-17T00:42:52.661413875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfghl,Uid:1522b219-3da2-4360-a455-7590bb24be2f,Namespace:kube-system,Attempt:1,}" May 17 00:42:52.700256 env[1298]: time="2025-05-17T00:42:52.700207443Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:52.714547 env[1298]: time="2025-05-17T00:42:52.714474428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:52.717909 env[1298]: time="2025-05-17T00:42:52.717754102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:52.720037 env[1298]: time="2025-05-17T00:42:52.719269648Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:52.720251 env[1298]: time="2025-05-17T00:42:52.719969599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:42:52.725602 env[1298]: time="2025-05-17T00:42:52.725555199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:42:52.727741 env[1298]: time="2025-05-17T00:42:52.727685474Z" level=info msg="CreateContainer within sandbox \"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:42:52.749737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996229052.mount: Deactivated successfully. May 17 00:42:52.754321 env[1298]: time="2025-05-17T00:42:52.754237191Z" level=info msg="CreateContainer within sandbox \"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f6775fc47db894ec849b746ff045ac23f2413f1d38ab4d3e3a6d5a0ca5f6a3a\"" May 17 00:42:52.756696 env[1298]: time="2025-05-17T00:42:52.756654078Z" level=info msg="StartContainer for \"4f6775fc47db894ec849b746ff045ac23f2413f1d38ab4d3e3a6d5a0ca5f6a3a\"" May 17 00:42:52.840514 systemd-networkd[1062]: caliccf61d4ea9b: Gained IPv6LL May 17 00:42:52.921171 env[1298]: time="2025-05-17T00:42:52.921091255Z" level=info msg="StartContainer for \"4f6775fc47db894ec849b746ff045ac23f2413f1d38ab4d3e3a6d5a0ca5f6a3a\" returns successfully" May 17 00:42:52.931738 env[1298]: time="2025-05-17T00:42:52.931672686Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 
00:42:52.947397 env[1298]: time="2025-05-17T00:42:52.947285114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:42:52.949278 kubelet[2089]: E0517 00:42:52.948104 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:42:52.949278 kubelet[2089]: E0517 00:42:52.948199 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:42:52.949278 kubelet[2089]: E0517 00:42:52.948551 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2chb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:42:52.951883 kubelet[2089]: E0517 00:42:52.951788 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:42:52.952700 env[1298]: time="2025-05-17T00:42:52.952638783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:42:53.020515 systemd-networkd[1062]: cali62a066bbf89: Link UP May 17 00:42:53.025338 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:53.025547 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali62a066bbf89: link becomes ready May 17 00:42:53.025453 systemd-networkd[1062]: cali62a066bbf89: Gained carrier May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.801 [INFO][3981] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0 coredns-7c65d6cfc9- kube-system 1522b219-3da2-4360-a455-7590bb24be2f 955 0 2025-05-17 00:42:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 coredns-7c65d6cfc9-cfghl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62a066bbf89 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.802 [INFO][3981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.889 [INFO][4019] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" HandleID="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.890 [INFO][4019] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" HandleID="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"coredns-7c65d6cfc9-cfghl", "timestamp":"2025-05-17 00:42:52.889772724 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.890 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.890 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.890 [INFO][4019] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.904 [INFO][4019] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.918 [INFO][4019] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.942 [INFO][4019] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.959 [INFO][4019] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.973 [INFO][4019] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.974 [INFO][4019] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.978 [INFO][4019] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634 May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:52.989 [INFO][4019] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:53.001 [INFO][4019] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.5/26] block=192.168.99.0/26 
handle="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:53.001 [INFO][4019] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.5/26] handle="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:53.001 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:53.058391 env[1298]: 2025-05-17 00:42:53.001 [INFO][4019] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.5/26] IPv6=[] ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" HandleID="k8s-pod-network.4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.008 [INFO][3981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1522b219-3da2-4360-a455-7590bb24be2f", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"coredns-7c65d6cfc9-cfghl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62a066bbf89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.009 [INFO][3981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.5/32] ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.009 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62a066bbf89 ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.027 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.027 [INFO][3981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1522b219-3da2-4360-a455-7590bb24be2f", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634", Pod:"coredns-7c65d6cfc9-cfghl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62a066bbf89", MAC:"ce:4b:e1:9e:3c:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:53.059676 env[1298]: 2025-05-17 00:42:53.048 [INFO][3981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634" Namespace="kube-system" Pod="coredns-7c65d6cfc9-cfghl" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:42:53.090754 env[1298]: time="2025-05-17T00:42:53.090649672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:53.091099 env[1298]: time="2025-05-17T00:42:53.091049799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:53.091287 env[1298]: time="2025-05-17T00:42:53.091251043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:53.092152 env[1298]: time="2025-05-17T00:42:53.092096069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634 pid=4049 runtime=io.containerd.runc.v2 May 17 00:42:53.154000 audit[4082]: NETFILTER_CFG table=filter:110 family=2 entries=56 op=nft_register_chain pid=4082 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:53.157680 kernel: kauditd_printk_skb: 520 callbacks suppressed May 17 00:42:53.157806 kernel: audit: type=1325 audit(1747442573.154:413): table=filter:110 family=2 entries=56 op=nft_register_chain pid=4082 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:53.154000 audit[4082]: SYSCALL arch=c000003e syscall=46 success=yes exit=27764 a0=3 a1=7ffddcf45da0 a2=0 a3=7ffddcf45d8c items=0 ppid=3490 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.169338 kernel: audit: type=1300 audit(1747442573.154:413): arch=c000003e syscall=46 success=yes exit=27764 a0=3 a1=7ffddcf45da0 a2=0 a3=7ffddcf45d8c items=0 ppid=3490 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.154000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:53.175366 kernel: audit: type=1327 audit(1747442573.154:413): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 
00:42:53.209950 env[1298]: time="2025-05-17T00:42:53.209891671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfghl,Uid:1522b219-3da2-4360-a455-7590bb24be2f,Namespace:kube-system,Attempt:1,} returns sandbox id \"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634\"" May 17 00:42:53.214624 kubelet[2089]: E0517 00:42:53.214581 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:53.221584 env[1298]: time="2025-05-17T00:42:53.221522168Z" level=info msg="CreateContainer within sandbox \"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:42:53.243878 env[1298]: time="2025-05-17T00:42:53.243800654Z" level=info msg="CreateContainer within sandbox \"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d19fd3c52cb81590df1c53bcbd03229898c5ec5c6fc3f4ea4ca68e6efd27f1e4\"" May 17 00:42:53.247974 env[1298]: time="2025-05-17T00:42:53.247925241Z" level=info msg="StartContainer for \"d19fd3c52cb81590df1c53bcbd03229898c5ec5c6fc3f4ea4ca68e6efd27f1e4\"" May 17 00:42:53.330966 env[1298]: time="2025-05-17T00:42:53.330901476Z" level=info msg="StartContainer for \"d19fd3c52cb81590df1c53bcbd03229898c5ec5c6fc3f4ea4ca68e6efd27f1e4\" returns successfully" May 17 00:42:53.646841 kubelet[2089]: E0517 00:42:53.646805 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:53.684869 kubelet[2089]: E0517 00:42:53.684823 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:42:53.706660 kubelet[2089]: I0517 00:42:53.705325 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cfghl" podStartSLOduration=40.705258489 podStartE2EDuration="40.705258489s" podCreationTimestamp="2025-05-17 00:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:53.675969992 +0000 UTC m=+44.739784847" watchObservedRunningTime="2025-05-17 00:42:53.705258489 +0000 UTC m=+44.769073330" May 17 00:42:53.708000 audit[4131]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=4131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.713315 kernel: audit: type=1325 audit(1747442573.708:414): table=filter:111 family=2 entries=20 op=nft_register_rule pid=4131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.708000 audit[4131]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff623d7220 a2=0 a3=7fff623d720c items=0 ppid=2190 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.718322 kernel: audit: type=1300 audit(1747442573.708:414): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff623d7220 a2=0 a3=7fff623d720c items=0 ppid=2190 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 
00:42:53.723331 kernel: audit: type=1327 audit(1747442573.708:414): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:53.722000 audit[4131]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.726506 kernel: audit: type=1325 audit(1747442573.722:415): table=nat:112 family=2 entries=14 op=nft_register_rule pid=4131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.731461 kubelet[2089]: I0517 00:42:53.731398 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c44f8777-nv4xw" podStartSLOduration=26.843238424 podStartE2EDuration="29.731372251s" podCreationTimestamp="2025-05-17 00:42:24 +0000 UTC" firstStartedPulling="2025-05-17 00:42:49.833843763 +0000 UTC m=+40.897658595" lastFinishedPulling="2025-05-17 00:42:52.721977602 +0000 UTC m=+43.785792422" observedRunningTime="2025-05-17 00:42:53.730663913 +0000 UTC m=+44.794478770" watchObservedRunningTime="2025-05-17 00:42:53.731372251 +0000 UTC m=+44.795187095" May 17 00:42:53.722000 audit[4131]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff623d7220 a2=0 a3=0 items=0 ppid=2190 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.742617 kernel: audit: type=1300 audit(1747442573.722:415): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff623d7220 a2=0 a3=0 items=0 ppid=2190 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.742794 kernel: audit: type=1327 audit(1747442573.722:415): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:53.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:53.755000 audit[4133]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.763436 kernel: audit: type=1325 audit(1747442573.755:416): table=filter:113 family=2 entries=20 op=nft_register_rule pid=4133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.755000 audit[4133]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd43391ff0 a2=0 a3=7ffd43391fdc items=0 ppid=2190 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:53.766000 audit[4133]: NETFILTER_CFG table=nat:114 family=2 entries=14 op=nft_register_rule pid=4133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:53.766000 audit[4133]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd43391ff0 a2=0 a3=0 items=0 ppid=2190 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:53.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:54.352666 env[1298]: time="2025-05-17T00:42:54.352608081Z" level=info msg="StopPodSandbox for 
\"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" May 17 00:42:54.354337 env[1298]: time="2025-05-17T00:42:54.352830819Z" level=info msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" May 17 00:42:54.701779 kubelet[2089]: E0517 00:42:54.701340 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:54.720314 kubelet[2089]: I0517 00:42:54.720242 2089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.567 [INFO][4160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.567 [INFO][4160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" iface="eth0" netns="/var/run/netns/cni-139b9ac3-1e17-0a60-04bd-d8b979d50121" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.567 [INFO][4160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" iface="eth0" netns="/var/run/netns/cni-139b9ac3-1e17-0a60-04bd-d8b979d50121" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.568 [INFO][4160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" iface="eth0" netns="/var/run/netns/cni-139b9ac3-1e17-0a60-04bd-d8b979d50121" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.568 [INFO][4160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.568 [INFO][4160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.762 [INFO][4174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.762 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.762 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.791 [WARNING][4174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.791 [INFO][4174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.794 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:54.803880 env[1298]: 2025-05-17 00:42:54.799 [INFO][4160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:42:54.807948 systemd[1]: run-netns-cni\x2d139b9ac3\x2d1e17\x2d0a60\x2d04bd\x2dd8b979d50121.mount: Deactivated successfully. 
May 17 00:42:54.813176 env[1298]: time="2025-05-17T00:42:54.813115788Z" level=info msg="TearDown network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" successfully" May 17 00:42:54.815187 env[1298]: time="2025-05-17T00:42:54.815143816Z" level=info msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" returns successfully" May 17 00:42:54.816980 env[1298]: time="2025-05-17T00:42:54.816926114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrqvm,Uid:cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb,Namespace:calico-system,Attempt:1,}" May 17 00:42:54.824496 systemd-networkd[1062]: cali62a066bbf89: Gained IPv6LL May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.573 [INFO][4156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.573 [INFO][4156] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" iface="eth0" netns="/var/run/netns/cni-318a7c19-c2a1-b2d0-9fae-b586eacae5b4" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.578 [INFO][4156] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" iface="eth0" netns="/var/run/netns/cni-318a7c19-c2a1-b2d0-9fae-b586eacae5b4" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.579 [INFO][4156] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" iface="eth0" netns="/var/run/netns/cni-318a7c19-c2a1-b2d0-9fae-b586eacae5b4" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.579 [INFO][4156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.579 [INFO][4156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.773 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.774 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.795 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.812 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.812 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.816 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:54.849443 env[1298]: 2025-05-17 00:42:54.844 [INFO][4156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:42:54.855228 systemd[1]: run-netns-cni\x2d318a7c19\x2dc2a1\x2db2d0\x2d9fae\x2db586eacae5b4.mount: Deactivated successfully. 
May 17 00:42:54.861548 env[1298]: time="2025-05-17T00:42:54.861478360Z" level=info msg="TearDown network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" successfully" May 17 00:42:54.861841 env[1298]: time="2025-05-17T00:42:54.861792657Z" level=info msg="StopPodSandbox for \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" returns successfully" May 17 00:42:54.862970 env[1298]: time="2025-05-17T00:42:54.862919595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c44f8777-hd5ct,Uid:72820e23-e5e9-4de9-a5fc-0c6f661e245d,Namespace:calico-apiserver,Attempt:1,}" May 17 00:42:54.868000 audit[4187]: NETFILTER_CFG table=filter:115 family=2 entries=17 op=nft_register_rule pid=4187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:54.868000 audit[4187]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefc0856f0 a2=0 a3=7ffefc0856dc items=0 ppid=2190 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:54.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:54.879000 audit[4187]: NETFILTER_CFG table=nat:116 family=2 entries=35 op=nft_register_chain pid=4187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:54.879000 audit[4187]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffefc0856f0 a2=0 a3=7ffefc0856dc items=0 ppid=2190 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:54.879000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:55.357976 env[1298]: time="2025-05-17T00:42:55.357920588Z" level=info msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" May 17 00:42:55.547629 systemd-networkd[1062]: calidca22495beb: Link UP May 17 00:42:55.553757 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:55.553945 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidca22495beb: link becomes ready May 17 00:42:55.554158 systemd-networkd[1062]: calidca22495beb: Gained carrier May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.137 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0 csi-node-driver- calico-system cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb 991 0 2025-05-17 00:42:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 csi-node-driver-xrqvm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidca22495beb [] [] }} ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.138 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.670665 
env[1298]: 2025-05-17 00:42:55.367 [INFO][4218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" HandleID="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.367 [INFO][4218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" HandleID="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"csi-node-driver-xrqvm", "timestamp":"2025-05-17 00:42:55.367125357 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.367 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.367 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.367 [INFO][4218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.405 [INFO][4218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.428 [INFO][4218] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.445 [INFO][4218] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.449 [INFO][4218] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.458 [INFO][4218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.458 [INFO][4218] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.464 [INFO][4218] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.479 [INFO][4218] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.509 [INFO][4218] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.6/26] block=192.168.99.0/26 
handle="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.509 [INFO][4218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.6/26] handle="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.509 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:55.670665 env[1298]: 2025-05-17 00:42:55.510 [INFO][4218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.6/26] IPv6=[] ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" HandleID="k8s-pod-network.c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.532 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"csi-node-driver-xrqvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidca22495beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.532 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.6/32] ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.532 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidca22495beb ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.572 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.572 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b", Pod:"csi-node-driver-xrqvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidca22495beb", MAC:"5a:41:e0:d4:f4:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:55.671560 env[1298]: 2025-05-17 00:42:55.626 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b" Namespace="calico-system" Pod="csi-node-driver-xrqvm" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:42:55.709076 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali02943f259b8: link becomes ready May 17 00:42:55.708555 systemd-networkd[1062]: cali02943f259b8: Link UP May 17 00:42:55.708896 systemd-networkd[1062]: cali02943f259b8: Gained carrier May 17 00:42:55.716000 audit[4249]: NETFILTER_CFG table=filter:117 family=2 entries=44 op=nft_register_chain pid=4249 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:55.716000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=21920 a0=3 a1=7ffc7c2fe350 a2=0 a3=7ffc7c2fe33c items=0 ppid=3490 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:55.716000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:55.740623 kubelet[2089]: E0517 00:42:55.734497 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.140 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0 calico-apiserver-7c44f8777- calico-apiserver 72820e23-e5e9-4de9-a5fc-0c6f661e245d 992 0 2025-05-17 00:42:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c44f8777 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 calico-apiserver-7c44f8777-hd5ct eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02943f259b8 [] [] }} ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.140 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.438 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" HandleID="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.440 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" HandleID="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aa80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"calico-apiserver-7c44f8777-hd5ct", "timestamp":"2025-05-17 00:42:55.438915333 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.440 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.511 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.511 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.557 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.596 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.631 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.644 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.650 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.650 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.655 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb May 17 
00:42:55.803487 env[1298]: 2025-05-17 00:42:55.663 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.675 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.7/26] block=192.168.99.0/26 handle="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.675 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.7/26] handle="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.675 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:55.803487 env[1298]: 2025-05-17 00:42:55.675 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.7/26] IPv6=[] ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" HandleID="k8s-pod-network.17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.804584 env[1298]: 2025-05-17 00:42:55.681 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"72820e23-e5e9-4de9-a5fc-0c6f661e245d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"calico-apiserver-7c44f8777-hd5ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02943f259b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:55.804584 env[1298]: 2025-05-17 00:42:55.681 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.7/32] ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.804584 env[1298]: 2025-05-17 00:42:55.681 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02943f259b8 ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 
00:42:55.804584 env[1298]: 2025-05-17 00:42:55.715 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.804584 env[1298]: 2025-05-17 00:42:55.724 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"72820e23-e5e9-4de9-a5fc-0c6f661e245d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb", Pod:"calico-apiserver-7c44f8777-hd5ct", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02943f259b8", MAC:"9a:70:d6:64:04:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:55.804584 env[1298]: 2025-05-17 00:42:55.791 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb" Namespace="calico-apiserver" Pod="calico-apiserver-7c44f8777-hd5ct" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:42:55.841777 env[1298]: time="2025-05-17T00:42:55.828527834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:55.841777 env[1298]: time="2025-05-17T00:42:55.828575966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:55.841777 env[1298]: time="2025-05-17T00:42:55.828586688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:55.841777 env[1298]: time="2025-05-17T00:42:55.828716885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b pid=4268 runtime=io.containerd.runc.v2 May 17 00:42:55.893883 env[1298]: time="2025-05-17T00:42:55.893756804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:55.894191 env[1298]: time="2025-05-17T00:42:55.894125436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:55.894464 env[1298]: time="2025-05-17T00:42:55.894419304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:55.894948 env[1298]: time="2025-05-17T00:42:55.894898948Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb pid=4287 runtime=io.containerd.runc.v2 May 17 00:42:55.974966 systemd[1]: run-containerd-runc-k8s.io-c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b-runc.28HK3f.mount: Deactivated successfully. May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.857 [INFO][4238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.857 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" iface="eth0" netns="/var/run/netns/cni-b39a9079-b2e6-95c0-001f-4393154f7a5d" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.857 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" iface="eth0" netns="/var/run/netns/cni-b39a9079-b2e6-95c0-001f-4393154f7a5d" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.858 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" iface="eth0" netns="/var/run/netns/cni-b39a9079-b2e6-95c0-001f-4393154f7a5d" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.858 [INFO][4238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.858 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.997 [INFO][4290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.998 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:55.998 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:56.054 [WARNING][4290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:56.054 [INFO][4290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:56.059 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:56.066904 env[1298]: 2025-05-17 00:42:56.062 [INFO][4238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:42:56.080500 env[1298]: time="2025-05-17T00:42:56.080425220Z" level=info msg="TearDown network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" successfully" May 17 00:42:56.080794 env[1298]: time="2025-05-17T00:42:56.080768957Z" level=info msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" returns successfully" May 17 00:42:56.083466 kubelet[2089]: E0517 00:42:56.082745 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:56.083930 env[1298]: time="2025-05-17T00:42:56.083881679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rghhh,Uid:9b0969e4-be19-4aa6-ac52-0eb151d2ba1f,Namespace:kube-system,Attempt:1,}" May 17 00:42:56.255472 env[1298]: time="2025-05-17T00:42:56.255428515Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c44f8777-hd5ct,Uid:72820e23-e5e9-4de9-a5fc-0c6f661e245d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb\"" May 17 00:42:56.263626 env[1298]: time="2025-05-17T00:42:56.263566075Z" level=info msg="CreateContainer within sandbox \"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:42:56.280000 audit[4357]: NETFILTER_CFG table=filter:118 family=2 entries=49 op=nft_register_chain pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:56.280000 audit[4357]: SYSCALL arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7ffd613cfed0 a2=0 a3=7ffd613cfebc items=0 ppid=3490 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:56.280000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:56.349277 env[1298]: time="2025-05-17T00:42:56.349225572Z" level=info msg="CreateContainer within sandbox \"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2b939727be22ef1c6cab42f531623ed9273082714fb7a30dcaf2c70e49564b65\"" May 17 00:42:56.362488 env[1298]: time="2025-05-17T00:42:56.362433354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xrqvm,Uid:cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb,Namespace:calico-system,Attempt:1,} returns sandbox id \"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b\"" May 17 00:42:56.363214 env[1298]: time="2025-05-17T00:42:56.363183513Z" level=info msg="StartContainer for 
\"2b939727be22ef1c6cab42f531623ed9273082714fb7a30dcaf2c70e49564b65\"" May 17 00:42:56.610877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:42:56.611039 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4c477a7f572: link becomes ready May 17 00:42:56.611591 systemd-networkd[1062]: cali4c477a7f572: Link UP May 17 00:42:56.611859 systemd-networkd[1062]: cali4c477a7f572: Gained carrier May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.263 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0 coredns-7c65d6cfc9- kube-system 9b0969e4-be19-4aa6-ac52-0eb151d2ba1f 1006 0 2025-05-17 00:42:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-9c3fefbd06 coredns-7c65d6cfc9-rghhh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c477a7f572 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.264 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.448 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" 
HandleID="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.449 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" HandleID="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-9c3fefbd06", "pod":"coredns-7c65d6cfc9-rghhh", "timestamp":"2025-05-17 00:42:56.448685166 +0000 UTC"}, Hostname:"ci-3510.3.7-n-9c3fefbd06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.449 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.449 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.449 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-9c3fefbd06' May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.516 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.525 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.553 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.560 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.564 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.564 [INFO][4366] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.567 [INFO][4366] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.574 [INFO][4366] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.587 [INFO][4366] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.8/26] block=192.168.99.0/26 
handle="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.588 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.8/26] handle="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" host="ci-3510.3.7-n-9c3fefbd06" May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.588 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:42:56.647341 env[1298]: 2025-05-17 00:42:56.588 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.8/26] IPv6=[] ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" HandleID="k8s-pod-network.3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.591 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"", Pod:"coredns-7c65d6cfc9-rghhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c477a7f572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.592 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.8/32] ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.592 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c477a7f572 ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.619 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.623 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab", Pod:"coredns-7c65d6cfc9-rghhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c477a7f572", MAC:"fa:d4:34:c7:a1:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:42:56.652431 env[1298]: 2025-05-17 00:42:56.639 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rghhh" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:42:56.679570 env[1298]: time="2025-05-17T00:42:56.679517095Z" level=info msg="StartContainer for \"2b939727be22ef1c6cab42f531623ed9273082714fb7a30dcaf2c70e49564b65\" returns successfully" May 17 00:42:56.705920 env[1298]: time="2025-05-17T00:42:56.705845264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:56.706281 env[1298]: time="2025-05-17T00:42:56.706129495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:56.706281 env[1298]: time="2025-05-17T00:42:56.706165867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:56.706614 env[1298]: time="2025-05-17T00:42:56.706568295Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab pid=4424 runtime=io.containerd.runc.v2 May 17 00:42:56.708000 audit[4413]: NETFILTER_CFG table=filter:119 family=2 entries=48 op=nft_register_chain pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:42:56.708000 audit[4413]: SYSCALL arch=c000003e syscall=46 success=yes exit=22688 a0=3 a1=7fff63d1adb0 a2=0 a3=7fff63d1ad9c items=0 ppid=3490 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:56.708000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:42:56.746407 systemd-networkd[1062]: calidca22495beb: Gained IPv6LL May 17 00:42:56.856467 systemd[1]: run-netns-cni\x2db39a9079\x2db2e6\x2d95c0\x2d001f\x2d4393154f7a5d.mount: Deactivated successfully. 
May 17 00:42:56.886000 audit[4465]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:56.886000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe2acaab0 a2=0 a3=7fffe2acaa9c items=0 ppid=2190 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:56.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:56.891000 audit[4465]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:56.891000 audit[4465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffe2acaab0 a2=0 a3=7fffe2acaa9c items=0 ppid=2190 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:56.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:56.892394 env[1298]: time="2025-05-17T00:42:56.892351534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rghhh,Uid:9b0969e4-be19-4aa6-ac52-0eb151d2ba1f,Namespace:kube-system,Attempt:1,} returns sandbox id \"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab\"" May 17 00:42:56.893834 kubelet[2089]: E0517 00:42:56.893618 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:56.903214 env[1298]: 
time="2025-05-17T00:42:56.903157619Z" level=info msg="CreateContainer within sandbox \"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:42:56.958346 env[1298]: time="2025-05-17T00:42:56.942432182Z" level=info msg="CreateContainer within sandbox \"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ced37de8aa20e90eb92e861f2ac72190da5e65b24b6fcd4c7b32bd2684937417\"" May 17 00:42:56.959322 env[1298]: time="2025-05-17T00:42:56.959257385Z" level=info msg="StartContainer for \"ced37de8aa20e90eb92e861f2ac72190da5e65b24b6fcd4c7b32bd2684937417\"" May 17 00:42:57.059477 systemd-networkd[1062]: cali02943f259b8: Gained IPv6LL May 17 00:42:57.180322 env[1298]: time="2025-05-17T00:42:57.180172510Z" level=info msg="StartContainer for \"ced37de8aa20e90eb92e861f2ac72190da5e65b24b6fcd4c7b32bd2684937417\" returns successfully" May 17 00:42:57.777194 kubelet[2089]: I0517 00:42:57.773798 2089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:42:57.781051 kubelet[2089]: E0517 00:42:57.779384 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:57.783465 env[1298]: time="2025-05-17T00:42:57.783419036Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:57.796146 env[1298]: time="2025-05-17T00:42:57.796102765Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:57.804330 kubelet[2089]: I0517 00:42:57.801895 2089 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c44f8777-hd5ct" podStartSLOduration=33.800705331 podStartE2EDuration="33.800705331s" podCreationTimestamp="2025-05-17 00:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:56.807281858 +0000 UTC m=+47.871096698" watchObservedRunningTime="2025-05-17 00:42:57.800705331 +0000 UTC m=+48.864520175" May 17 00:42:57.805371 kubelet[2089]: I0517 00:42:57.805318 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rghhh" podStartSLOduration=44.805286483 podStartE2EDuration="44.805286483s" podCreationTimestamp="2025-05-17 00:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:57.804958114 +0000 UTC m=+48.868772946" watchObservedRunningTime="2025-05-17 00:42:57.805286483 +0000 UTC m=+48.869101324" May 17 00:42:57.805831 env[1298]: time="2025-05-17T00:42:57.805792552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:57.813953 env[1298]: time="2025-05-17T00:42:57.813905436Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:57.815064 env[1298]: time="2025-05-17T00:42:57.815019330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:42:57.816819 env[1298]: time="2025-05-17T00:42:57.816789065Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:42:57.856059 env[1298]: time="2025-05-17T00:42:57.855870189Z" level=info msg="CreateContainer within sandbox \"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:42:57.877602 env[1298]: time="2025-05-17T00:42:57.877543036Z" level=info msg="CreateContainer within sandbox \"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84\"" May 17 00:42:57.878536 env[1298]: time="2025-05-17T00:42:57.878504703Z" level=info msg="StartContainer for \"8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84\"" May 17 00:42:57.907000 audit[4511]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.907000 audit[4511]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3de2c600 a2=0 a3=7ffd3de2c5ec items=0 ppid=2190 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.913121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258052437.mount: Deactivated successfully. 
May 17 00:42:57.935000 audit[4511]: NETFILTER_CFG table=nat:123 family=2 entries=44 op=nft_register_rule pid=4511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.935000 audit[4511]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd3de2c600 a2=0 a3=7ffd3de2c5ec items=0 ppid=2190 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:58.001000 audit[4529]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:58.001000 audit[4529]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc6351390 a2=0 a3=7ffdc635137c items=0 ppid=2190 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:58.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:58.027000 audit[4529]: NETFILTER_CFG table=nat:125 family=2 entries=56 op=nft_register_chain pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:58.027000 audit[4529]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdc6351390 a2=0 a3=7ffdc635137c items=0 ppid=2190 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:58.027000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:58.077137 env[1298]: time="2025-05-17T00:42:58.074434735Z" level=info msg="StartContainer for \"8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84\" returns successfully" May 17 00:42:58.595531 systemd-networkd[1062]: cali4c477a7f572: Gained IPv6LL May 17 00:42:58.801941 kubelet[2089]: E0517 00:42:58.801867 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:58.840359 kubelet[2089]: I0517 00:42:58.840244 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d946d6c9d-wf9nz" podStartSLOduration=24.044219862 podStartE2EDuration="30.840220466s" podCreationTimestamp="2025-05-17 00:42:28 +0000 UTC" firstStartedPulling="2025-05-17 00:42:51.020560196 +0000 UTC m=+42.084375033" lastFinishedPulling="2025-05-17 00:42:57.816560817 +0000 UTC m=+48.880375637" observedRunningTime="2025-05-17 00:42:58.839767119 +0000 UTC m=+49.903581960" watchObservedRunningTime="2025-05-17 00:42:58.840220466 +0000 UTC m=+49.904035307" May 17 00:42:58.855443 systemd[1]: run-containerd-runc-k8s.io-8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84-runc.aI0pLq.mount: Deactivated successfully. 
May 17 00:42:59.050000 audit[4558]: NETFILTER_CFG table=filter:126 family=2 entries=13 op=nft_register_rule pid=4558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:59.052801 kernel: kauditd_printk_skb: 38 callbacks suppressed May 17 00:42:59.053237 kernel: audit: type=1325 audit(1747442579.050:429): table=filter:126 family=2 entries=13 op=nft_register_rule pid=4558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:59.050000 audit[4558]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd965a13d0 a2=0 a3=7ffd965a13bc items=0 ppid=2190 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:59.058004 kernel: audit: type=1300 audit(1747442579.050:429): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd965a13d0 a2=0 a3=7ffd965a13bc items=0 ppid=2190 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:59.058141 kernel: audit: type=1327 audit(1747442579.050:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:59.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:59.060000 audit[4558]: NETFILTER_CFG table=nat:127 family=2 entries=27 op=nft_register_chain pid=4558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:59.067102 kernel: audit: type=1325 audit(1747442579.060:430): table=nat:127 family=2 entries=27 op=nft_register_chain pid=4558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:59.060000 audit[4558]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=9348 a0=3 a1=7ffd965a13d0 a2=0 a3=7ffd965a13bc items=0 ppid=2190 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:59.072947 kernel: audit: type=1300 audit(1747442579.060:430): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd965a13d0 a2=0 a3=7ffd965a13bc items=0 ppid=2190 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:59.073082 kernel: audit: type=1327 audit(1747442579.060:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:59.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:59.226564 env[1298]: time="2025-05-17T00:42:59.226507122Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:59.228166 env[1298]: time="2025-05-17T00:42:59.228130869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:59.229859 env[1298]: time="2025-05-17T00:42:59.229824287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:59.231679 env[1298]: time="2025-05-17T00:42:59.231639807Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:59.232187 env[1298]: time="2025-05-17T00:42:59.232143477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:42:59.243664 env[1298]: time="2025-05-17T00:42:59.243560090Z" level=info msg="CreateContainer within sandbox \"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:42:59.260150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741335623.mount: Deactivated successfully. May 17 00:42:59.265271 env[1298]: time="2025-05-17T00:42:59.265186073Z" level=info msg="CreateContainer within sandbox \"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f9dcce50a45c2a33f7e60d0e58e2eb397e1d20b58d21a2bd18e0c5d4f5f42e9d\"" May 17 00:42:59.268535 env[1298]: time="2025-05-17T00:42:59.268485100Z" level=info msg="StartContainer for \"f9dcce50a45c2a33f7e60d0e58e2eb397e1d20b58d21a2bd18e0c5d4f5f42e9d\"" May 17 00:42:59.377967 env[1298]: time="2025-05-17T00:42:59.377922100Z" level=info msg="StartContainer for \"f9dcce50a45c2a33f7e60d0e58e2eb397e1d20b58d21a2bd18e0c5d4f5f42e9d\" returns successfully" May 17 00:42:59.387832 env[1298]: time="2025-05-17T00:42:59.387783961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:42:59.807600 kubelet[2089]: E0517 00:42:59.807514 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:42:59.808794 kubelet[2089]: I0517 00:42:59.808761 2089 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:42:59.853616 systemd[1]: run-containerd-runc-k8s.io-f9dcce50a45c2a33f7e60d0e58e2eb397e1d20b58d21a2bd18e0c5d4f5f42e9d-runc.36aeU1.mount: Deactivated successfully. May 17 00:43:00.923526 env[1298]: time="2025-05-17T00:43:00.923466878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:00.925474 env[1298]: time="2025-05-17T00:43:00.925429006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:00.927639 env[1298]: time="2025-05-17T00:43:00.927596112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:00.930338 env[1298]: time="2025-05-17T00:43:00.930254382Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:00.932019 env[1298]: time="2025-05-17T00:43:00.931973840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:43:00.939503 env[1298]: time="2025-05-17T00:43:00.939454792Z" level=info msg="CreateContainer within sandbox \"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:43:00.963244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119501085.mount: 
Deactivated successfully. May 17 00:43:00.970711 env[1298]: time="2025-05-17T00:43:00.970643632Z" level=info msg="CreateContainer within sandbox \"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"74562d124a93413025be6b4c0c4aff3c0337ac5c38e502e2767eec12c51d12ff\"" May 17 00:43:00.976802 env[1298]: time="2025-05-17T00:43:00.976755971Z" level=info msg="StartContainer for \"74562d124a93413025be6b4c0c4aff3c0337ac5c38e502e2767eec12c51d12ff\"" May 17 00:43:01.085510 env[1298]: time="2025-05-17T00:43:01.085453213Z" level=info msg="StartContainer for \"74562d124a93413025be6b4c0c4aff3c0337ac5c38e502e2767eec12c51d12ff\" returns successfully" May 17 00:43:01.657132 kubelet[2089]: I0517 00:43:01.655661 2089 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:43:01.657132 kubelet[2089]: I0517 00:43:01.657137 2089 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:43:01.888332 kubelet[2089]: I0517 00:43:01.883815 2089 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xrqvm" podStartSLOduration=30.344844959 podStartE2EDuration="34.879917189s" podCreationTimestamp="2025-05-17 00:42:27 +0000 UTC" firstStartedPulling="2025-05-17 00:42:56.400710491 +0000 UTC m=+47.464525311" lastFinishedPulling="2025-05-17 00:43:00.93578272 +0000 UTC m=+51.999597541" observedRunningTime="2025-05-17 00:43:01.879740769 +0000 UTC m=+52.943555610" watchObservedRunningTime="2025-05-17 00:43:01.879917189 +0000 UTC m=+52.943732039" May 17 00:43:03.354745 env[1298]: time="2025-05-17T00:43:03.354433462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:43:03.561634 env[1298]: 
time="2025-05-17T00:43:03.561346965Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:03.562546 env[1298]: time="2025-05-17T00:43:03.562459916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:03.567487 kubelet[2089]: E0517 00:43:03.566357 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:03.568081 kubelet[2089]: E0517 00:43:03.567518 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:03.584021 kubelet[2089]: E0517 00:43:03.583918 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8231cfdb1ec4b548cb8c42dc4784a49,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:03.586994 env[1298]: time="2025-05-17T00:43:03.586923166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:43:03.789076 
env[1298]: time="2025-05-17T00:43:03.788988199Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:03.790103 env[1298]: time="2025-05-17T00:43:03.789969164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:03.790565 kubelet[2089]: E0517 00:43:03.790503 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:03.790721 kubelet[2089]: E0517 00:43:03.790585 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:03.790794 kubelet[2089]: E0517 00:43:03.790737 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:03.793376 kubelet[2089]: E0517 00:43:03.793320 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:43:04.353514 env[1298]: time="2025-05-17T00:43:04.353440415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:43:04.384642 kubelet[2089]: I0517 00:43:04.384596 2089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:04.442236 systemd[1]: run-containerd-runc-k8s.io-8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84-runc.eTbaXB.mount: Deactivated successfully. May 17 00:43:04.547313 systemd[1]: run-containerd-runc-k8s.io-8e241d9d332a564e35ba48d963d5de573779dc24a3a85da57ae311c4f5af1a84-runc.q1qOLD.mount: Deactivated successfully. 
May 17 00:43:04.567112 env[1298]: time="2025-05-17T00:43:04.567022410Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:04.569135 env[1298]: time="2025-05-17T00:43:04.568964854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:04.572169 kubelet[2089]: E0517 00:43:04.572092 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:04.573597 kubelet[2089]: E0517 00:43:04.572214 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:04.576317 kubelet[2089]: E0517 00:43:04.576206 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2chb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:04.577540 kubelet[2089]: E0517 00:43:04.577451 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:43:08.519168 kubelet[2089]: I0517 00:43:08.518981 2089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:08.585000 audit[4673]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=4673 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:08.585000 audit[4673]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff817b5d90 a2=0 a3=7fff817b5d7c items=0 ppid=2190 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:08.593616 kernel: audit: type=1325 audit(1747442588.585:431): table=filter:128 family=2 entries=12 op=nft_register_rule pid=4673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:08.593789 kernel: audit: type=1300 audit(1747442588.585:431): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff817b5d90 a2=0 a3=7fff817b5d7c items=0 ppid=2190 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:08.593822 kernel: audit: type=1327 audit(1747442588.585:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:08.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:08.598000 audit[4673]: NETFILTER_CFG table=nat:129 family=2 entries=34 op=nft_register_chain pid=4673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:08.598000 audit[4673]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff817b5d90 a2=0 a3=7fff817b5d7c items=0 ppid=2190 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:08.606574 kernel: audit: type=1325 audit(1747442588.598:432): table=nat:129 family=2 entries=34 
op=nft_register_chain pid=4673 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:08.606727 kernel: audit: type=1300 audit(1747442588.598:432): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff817b5d90 a2=0 a3=7fff817b5d7c items=0 ppid=2190 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:08.606756 kernel: audit: type=1327 audit(1747442588.598:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:08.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:09.391929 env[1298]: time="2025-05-17T00:43:09.391507650Z" level=info msg="StopPodSandbox for \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.623 [WARNING][4690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"72820e23-e5e9-4de9-a5fc-0c6f661e245d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb", Pod:"calico-apiserver-7c44f8777-hd5ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02943f259b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.630 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.630 [INFO][4690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" iface="eth0" netns="" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.630 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.630 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.795 [INFO][4697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.798 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.799 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.820 [WARNING][4697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.820 [INFO][4697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.822 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:09.828157 env[1298]: 2025-05-17 00:43:09.825 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.830314 env[1298]: time="2025-05-17T00:43:09.828737212Z" level=info msg="TearDown network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" successfully" May 17 00:43:09.830314 env[1298]: time="2025-05-17T00:43:09.828821506Z" level=info msg="StopPodSandbox for \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" returns successfully" May 17 00:43:09.833488 env[1298]: time="2025-05-17T00:43:09.833432795Z" level=info msg="RemovePodSandbox for \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" May 17 00:43:09.833967 env[1298]: time="2025-05-17T00:43:09.833897326Z" level=info msg="Forcibly stopping sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\"" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.926 [WARNING][4711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"72820e23-e5e9-4de9-a5fc-0c6f661e245d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"17bfcad52796ceb9e4e7251d4448f5094bb2bb784572db48c75f7360032ebeeb", Pod:"calico-apiserver-7c44f8777-hd5ct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02943f259b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.926 [INFO][4711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.926 [INFO][4711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" iface="eth0" netns="" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.927 [INFO][4711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.927 [INFO][4711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.960 [INFO][4718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.960 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.960 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.969 [WARNING][4718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.969 [INFO][4718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" HandleID="k8s-pod-network.99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--hd5ct-eth0" May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.974 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:09.980364 env[1298]: 2025-05-17 00:43:09.977 [INFO][4711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07" May 17 00:43:09.981118 env[1298]: time="2025-05-17T00:43:09.981073784Z" level=info msg="TearDown network for sandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" successfully" May 17 00:43:09.985059 env[1298]: time="2025-05-17T00:43:09.984997233Z" level=info msg="RemovePodSandbox \"99ed8213a5eb524d16816ab75de01840bce8b5bc5da36914cacf97da884f5e07\" returns successfully" May 17 00:43:09.986112 env[1298]: time="2025-05-17T00:43:09.986075542Z" level=info msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.040 [WARNING][4733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b", Pod:"csi-node-driver-xrqvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidca22495beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.041 [INFO][4733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.041 [INFO][4733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" iface="eth0" netns="" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.041 [INFO][4733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.041 [INFO][4733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.070 [INFO][4740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.071 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.071 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.079 [WARNING][4740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.079 [INFO][4740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.082 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.088420 env[1298]: 2025-05-17 00:43:10.084 [INFO][4733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.089252 env[1298]: time="2025-05-17T00:43:10.089210310Z" level=info msg="TearDown network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" successfully" May 17 00:43:10.089399 env[1298]: time="2025-05-17T00:43:10.089379540Z" level=info msg="StopPodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" returns successfully" May 17 00:43:10.090321 env[1298]: time="2025-05-17T00:43:10.090265771Z" level=info msg="RemovePodSandbox for \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" May 17 00:43:10.090621 env[1298]: time="2025-05-17T00:43:10.090538314Z" level=info msg="Forcibly stopping sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\"" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.136 [WARNING][4755] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdbbb2fb-cdbd-4763-b8d2-bfbd1cec2deb", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"c91fdfc6e0379e53e3cb4b69a508e99b9a36ab0b233525d9823e96572065b53b", Pod:"csi-node-driver-xrqvm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidca22495beb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.136 [INFO][4755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.136 [INFO][4755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" iface="eth0" netns="" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.136 [INFO][4755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.136 [INFO][4755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.162 [INFO][4762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.169 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.169 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.178 [WARNING][4762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.178 [INFO][4762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" HandleID="k8s-pod-network.beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-csi--node--driver--xrqvm-eth0" May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.180 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.185636 env[1298]: 2025-05-17 00:43:10.183 [INFO][4755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7" May 17 00:43:10.187091 env[1298]: time="2025-05-17T00:43:10.185706896Z" level=info msg="TearDown network for sandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" successfully" May 17 00:43:10.192017 env[1298]: time="2025-05-17T00:43:10.191938822Z" level=info msg="RemovePodSandbox \"beb9e354ef4ec42c481fa8aa71856a56abc5ac06169d70d19d0ba191201985e7\" returns successfully" May 17 00:43:10.197098 env[1298]: time="2025-05-17T00:43:10.197061511Z" level=info msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.252 [WARNING][4776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"2195973a-24fa-48aa-9e37-bf651df76422", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35", Pod:"goldmane-8f77d7b6c-w9g2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif5633de16d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.253 [INFO][4776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.253 [INFO][4776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" iface="eth0" netns="" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.253 [INFO][4776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.253 [INFO][4776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.290 [INFO][4783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.290 [INFO][4783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.290 [INFO][4783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.299 [WARNING][4783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.299 [INFO][4783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.303 [INFO][4783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.307856 env[1298]: 2025-05-17 00:43:10.305 [INFO][4776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.309649 env[1298]: time="2025-05-17T00:43:10.307906205Z" level=info msg="TearDown network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" successfully" May 17 00:43:10.309649 env[1298]: time="2025-05-17T00:43:10.307942954Z" level=info msg="StopPodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" returns successfully" May 17 00:43:10.309649 env[1298]: time="2025-05-17T00:43:10.308503048Z" level=info msg="RemovePodSandbox for \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" May 17 00:43:10.309649 env[1298]: time="2025-05-17T00:43:10.308534197Z" level=info msg="Forcibly stopping sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\"" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.360 [WARNING][4798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"2195973a-24fa-48aa-9e37-bf651df76422", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"cb56d765337757922d57e27bfcc60128f81eb7a8ce7e9401750ef6d308400d35", Pod:"goldmane-8f77d7b6c-w9g2b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif5633de16d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.360 [INFO][4798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.360 [INFO][4798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" iface="eth0" netns="" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.360 [INFO][4798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.360 [INFO][4798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.390 [INFO][4805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.390 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.390 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.399 [WARNING][4805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.400 [INFO][4805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" HandleID="k8s-pod-network.e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-goldmane--8f77d7b6c--w9g2b-eth0" May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.402 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.407388 env[1298]: 2025-05-17 00:43:10.405 [INFO][4798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e" May 17 00:43:10.408597 env[1298]: time="2025-05-17T00:43:10.408542792Z" level=info msg="TearDown network for sandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" successfully" May 17 00:43:10.413901 env[1298]: time="2025-05-17T00:43:10.413849513Z" level=info msg="RemovePodSandbox \"e0a9180726071de0d8706fe539470f206ae39027f49db9228daa30044f4a946e\" returns successfully" May 17 00:43:10.415213 env[1298]: time="2025-05-17T00:43:10.415174177Z" level=info msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.478 [WARNING][4819] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702", Pod:"calico-apiserver-7c44f8777-nv4xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50c88da89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.479 [INFO][4819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.479 [INFO][4819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" iface="eth0" netns="" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.479 [INFO][4819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.479 [INFO][4819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.515 [INFO][4826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.515 [INFO][4826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.515 [INFO][4826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.525 [WARNING][4826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.525 [INFO][4826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.528 [INFO][4826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.533003 env[1298]: 2025-05-17 00:43:10.530 [INFO][4819] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.535105 env[1298]: time="2025-05-17T00:43:10.533556969Z" level=info msg="TearDown network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" successfully" May 17 00:43:10.535105 env[1298]: time="2025-05-17T00:43:10.533594023Z" level=info msg="StopPodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" returns successfully" May 17 00:43:10.535105 env[1298]: time="2025-05-17T00:43:10.534771120Z" level=info msg="RemovePodSandbox for \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" May 17 00:43:10.535593 env[1298]: time="2025-05-17T00:43:10.534812678Z" level=info msg="Forcibly stopping sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\"" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.593 [WARNING][4842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0", GenerateName:"calico-apiserver-7c44f8777-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe17efee-1abf-4bdb-bcc2-aea4268fd8b1", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c44f8777", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"1ba400053fdedd0b3191e58b9da8d96ca03a7d57b06af8a0dd60ddfb267cd702", Pod:"calico-apiserver-7c44f8777-nv4xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic50c88da89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.593 [INFO][4842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.593 [INFO][4842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" iface="eth0" netns="" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.593 [INFO][4842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.593 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.626 [INFO][4849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.626 [INFO][4849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.627 [INFO][4849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.635 [WARNING][4849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.635 [INFO][4849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" HandleID="k8s-pod-network.17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--apiserver--7c44f8777--nv4xw-eth0" May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.639 [INFO][4849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.643449 env[1298]: 2025-05-17 00:43:10.641 [INFO][4842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2" May 17 00:43:10.644513 env[1298]: time="2025-05-17T00:43:10.643487242Z" level=info msg="TearDown network for sandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" successfully" May 17 00:43:10.646637 env[1298]: time="2025-05-17T00:43:10.646487829Z" level=info msg="RemovePodSandbox \"17b282950deaf931dc42a4378adb78f2e1dad49d58c305751f3d4f43f06088f2\" returns successfully" May 17 00:43:10.647149 env[1298]: time="2025-05-17T00:43:10.647122344Z" level=info msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.699 [WARNING][4864] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0", GenerateName:"calico-kube-controllers-7d946d6c9d-", Namespace:"calico-system", SelfLink:"", UID:"8def56ed-36b1-4f37-91fd-6252ab1906d1", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d946d6c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280", Pod:"calico-kube-controllers-7d946d6c9d-wf9nz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccf61d4ea9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.699 [INFO][4864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.699 [INFO][4864] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" iface="eth0" netns="" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.699 [INFO][4864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.699 [INFO][4864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.745 [INFO][4872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.745 [INFO][4872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.746 [INFO][4872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.762 [WARNING][4872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.762 [INFO][4872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.764 [INFO][4872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.771325 env[1298]: 2025-05-17 00:43:10.767 [INFO][4864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.772934 env[1298]: time="2025-05-17T00:43:10.772578244Z" level=info msg="TearDown network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" successfully" May 17 00:43:10.772934 env[1298]: time="2025-05-17T00:43:10.772622050Z" level=info msg="StopPodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" returns successfully" May 17 00:43:10.774914 env[1298]: time="2025-05-17T00:43:10.774868420Z" level=info msg="RemovePodSandbox for \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" May 17 00:43:10.775052 env[1298]: time="2025-05-17T00:43:10.774919673Z" level=info msg="Forcibly stopping sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\"" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.836 [WARNING][4887] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0", GenerateName:"calico-kube-controllers-7d946d6c9d-", Namespace:"calico-system", SelfLink:"", UID:"8def56ed-36b1-4f37-91fd-6252ab1906d1", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d946d6c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3fd5f53baa289c88acfaf830d2263674e81956ee6b6cd3a20f51f4d93c4ab280", Pod:"calico-kube-controllers-7d946d6c9d-wf9nz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliccf61d4ea9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.837 [INFO][4887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.837 [INFO][4887] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" iface="eth0" netns="" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.837 [INFO][4887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.837 [INFO][4887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.890 [INFO][4895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.890 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.890 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.906 [WARNING][4895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.912 [INFO][4895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" HandleID="k8s-pod-network.8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-calico--kube--controllers--7d946d6c9d--wf9nz-eth0" May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.917 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:10.922259 env[1298]: 2025-05-17 00:43:10.920 [INFO][4887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97" May 17 00:43:10.923600 env[1298]: time="2025-05-17T00:43:10.922535113Z" level=info msg="TearDown network for sandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" successfully" May 17 00:43:10.926391 env[1298]: time="2025-05-17T00:43:10.926337999Z" level=info msg="RemovePodSandbox \"8ff691fa1b89badc600d3ad3cba1d6a5326b0905da7b8bedb453bc0df494ab97\" returns successfully" May 17 00:43:10.927401 env[1298]: time="2025-05-17T00:43:10.927370449Z" level=info msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:10.977 [WARNING][4911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1522b219-3da2-4360-a455-7590bb24be2f", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634", Pod:"coredns-7c65d6cfc9-cfghl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62a066bbf89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:10.977 
[INFO][4911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:10.977 [INFO][4911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" iface="eth0" netns="" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:10.977 [INFO][4911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:10.977 [INFO][4911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.014 [INFO][4918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.015 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.015 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.025 [WARNING][4918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.025 [INFO][4918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.031 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.042375 env[1298]: 2025-05-17 00:43:11.039 [INFO][4911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.043610 env[1298]: time="2025-05-17T00:43:11.043514266Z" level=info msg="TearDown network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" successfully" May 17 00:43:11.043610 env[1298]: time="2025-05-17T00:43:11.043572346Z" level=info msg="StopPodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" returns successfully" May 17 00:43:11.044406 env[1298]: time="2025-05-17T00:43:11.044372353Z" level=info msg="RemovePodSandbox for \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" May 17 00:43:11.044650 env[1298]: time="2025-05-17T00:43:11.044595785Z" level=info msg="Forcibly stopping sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\"" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.095 [WARNING][4932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1522b219-3da2-4360-a455-7590bb24be2f", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"4dc3c6a13b2d032791d8b024abc8c3ead618189f2b886c24e0b4593f20f63634", Pod:"coredns-7c65d6cfc9-cfghl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62a066bbf89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.095 
[INFO][4932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.095 [INFO][4932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" iface="eth0" netns="" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.095 [INFO][4932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.095 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.127 [INFO][4939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.128 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.128 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.137 [WARNING][4939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.138 [INFO][4939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" HandleID="k8s-pod-network.00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--cfghl-eth0" May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.141 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.165642 env[1298]: 2025-05-17 00:43:11.150 [INFO][4932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc" May 17 00:43:11.165642 env[1298]: time="2025-05-17T00:43:11.154520317Z" level=info msg="TearDown network for sandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" successfully" May 17 00:43:11.165642 env[1298]: time="2025-05-17T00:43:11.158302839Z" level=info msg="RemovePodSandbox \"00199edd9a4b85a9265ed52e2a78992894dec039dd74189378ff293682a574cc\" returns successfully" May 17 00:43:11.165642 env[1298]: time="2025-05-17T00:43:11.159053460Z" level=info msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.232 [WARNING][4953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab", Pod:"coredns-7c65d6cfc9-rghhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c477a7f572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:11.292880 env[1298]: 2025-05-17 
00:43:11.233 [INFO][4953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.233 [INFO][4953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" iface="eth0" netns="" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.233 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.233 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.274 [INFO][4960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.274 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.274 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.283 [WARNING][4960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.283 [INFO][4960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.286 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.292880 env[1298]: 2025-05-17 00:43:11.289 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.294510 env[1298]: time="2025-05-17T00:43:11.292860749Z" level=info msg="TearDown network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" successfully" May 17 00:43:11.294510 env[1298]: time="2025-05-17T00:43:11.293668753Z" level=info msg="StopPodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" returns successfully" May 17 00:43:11.294510 env[1298]: time="2025-05-17T00:43:11.294452639Z" level=info msg="RemovePodSandbox for \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" May 17 00:43:11.294770 env[1298]: time="2025-05-17T00:43:11.294507361Z" level=info msg="Forcibly stopping sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\"" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.349 [WARNING][4976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b0969e4-be19-4aa6-ac52-0eb151d2ba1f", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-9c3fefbd06", ContainerID:"3410829f7781980e90fed073d3de5cf9d97909fb5bd5264aec15d61ac12bb7ab", Pod:"coredns-7c65d6cfc9-rghhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c477a7f572", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:11.407013 env[1298]: 2025-05-17 
00:43:11.350 [INFO][4976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.350 [INFO][4976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" iface="eth0" netns="" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.350 [INFO][4976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.350 [INFO][4976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.390 [INFO][4983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.390 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.391 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.399 [WARNING][4983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.399 [INFO][4983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" HandleID="k8s-pod-network.ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-coredns--7c65d6cfc9--rghhh-eth0" May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.401 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.407013 env[1298]: 2025-05-17 00:43:11.404 [INFO][4976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5" May 17 00:43:11.408690 env[1298]: time="2025-05-17T00:43:11.408629294Z" level=info msg="TearDown network for sandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" successfully" May 17 00:43:11.412449 env[1298]: time="2025-05-17T00:43:11.412388892Z" level=info msg="RemovePodSandbox \"ef4f489cee285fe66ebca1d71e08ea88f2faf6d8b9fff9b7f4ee57b52a0d3bc5\" returns successfully" May 17 00:43:11.413386 env[1298]: time="2025-05-17T00:43:11.413279388Z" level=info msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.481 [WARNING][4998] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.481 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.481 [INFO][4998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" iface="eth0" netns="" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.481 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.481 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.516 [INFO][5005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.517 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.517 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.525 [WARNING][5005] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.526 [INFO][5005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.530 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.535632 env[1298]: 2025-05-17 00:43:11.533 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.535632 env[1298]: time="2025-05-17T00:43:11.535607749Z" level=info msg="TearDown network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" successfully" May 17 00:43:11.535632 env[1298]: time="2025-05-17T00:43:11.535637971Z" level=info msg="StopPodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" returns successfully" May 17 00:43:11.536895 env[1298]: time="2025-05-17T00:43:11.536854711Z" level=info msg="RemovePodSandbox for \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" May 17 00:43:11.537051 env[1298]: time="2025-05-17T00:43:11.536999573Z" level=info msg="Forcibly stopping sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\"" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.588 [WARNING][5019] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" 
WorkloadEndpoint="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.588 [INFO][5019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.588 [INFO][5019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" iface="eth0" netns="" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.588 [INFO][5019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.588 [INFO][5019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.627 [INFO][5026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.627 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.627 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.638 [WARNING][5026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.638 [INFO][5026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" HandleID="k8s-pod-network.446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" Workload="ci--3510.3.7--n--9c3fefbd06-k8s-whisker--c894c5f7c--mgcgc-eth0" May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.641 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:11.646520 env[1298]: 2025-05-17 00:43:11.644 [INFO][5019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b" May 17 00:43:11.647501 env[1298]: time="2025-05-17T00:43:11.647459418Z" level=info msg="TearDown network for sandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" successfully" May 17 00:43:11.650863 env[1298]: time="2025-05-17T00:43:11.650796365Z" level=info msg="RemovePodSandbox \"446f86881c3e9e37e01b739181f12a97291a39a8140758898e1e05218f0b370b\" returns successfully" May 17 00:43:12.433777 systemd[1]: run-containerd-runc-k8s.io-1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8-runc.QU7uFi.mount: Deactivated successfully. May 17 00:43:16.044196 systemd[1]: Started sshd@7-137.184.190.96:22-147.75.109.163:46080.service. May 17 00:43:16.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.190.96:22-147.75.109.163:46080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:16.054696 kernel: audit: type=1130 audit(1747442596.048:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.190.96:22-147.75.109.163:46080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:16.196000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.201566 kernel: audit: type=1101 audit(1747442596.196:434): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.201678 sshd[5059]: Accepted publickey for core from 147.75.109.163 port 46080 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:16.202000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.209817 kernel: audit: type=1103 audit(1747442596.202:435): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.210009 kernel: audit: type=1006 audit(1747442596.203:436): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 17 00:43:16.210057 kernel: audit: type=1300 audit(1747442596.203:436): arch=c000003e 
syscall=1 success=yes exit=3 a0=5 a1=7ffcf727bf90 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:16.203000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf727bf90 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:16.216706 kernel: audit: type=1327 audit(1747442596.203:436): proctitle=737368643A20636F7265205B707269765D May 17 00:43:16.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:16.215886 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:16.261485 systemd-logind[1287]: New session 8 of user core. May 17 00:43:16.261976 systemd[1]: Started session-8.scope. May 17 00:43:16.272000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.276000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.282944 kernel: audit: type=1105 audit(1747442596.272:437): pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 
00:43:16.283549 kernel: audit: type=1103 audit(1747442596.276:438): pid=5062 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:16.517474 kubelet[2089]: E0517 00:43:16.517418 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:43:17.178025 sshd[5059]: pam_unix(sshd:session): session closed for user core May 17 00:43:17.181000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:17.187056 kernel: audit: type=1106 audit(1747442597.181:439): pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:17.184919 systemd[1]: sshd@7-137.184.190.96:22-147.75.109.163:46080.service: Deactivated successfully. 
May 17 00:43:17.181000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:17.191829 kernel: audit: type=1104 audit(1747442597.181:440): pid=5059 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:17.185924 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:43:17.186596 systemd-logind[1287]: Session 8 logged out. Waiting for processes to exit. May 17 00:43:17.191716 systemd-logind[1287]: Removed session 8. May 17 00:43:17.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-137.184.190.96:22-147.75.109.163:46080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:17.393056 kubelet[2089]: E0517 00:43:17.392981 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:43:18.356966 kubelet[2089]: E0517 00:43:18.356901 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:18.357696 kubelet[2089]: E0517 00:43:18.357485 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:22.183911 systemd[1]: Started sshd@8-137.184.190.96:22-147.75.109.163:41654.service. May 17 00:43:22.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.96:22-147.75.109.163:41654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:22.186210 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:43:22.188703 kernel: audit: type=1130 audit(1747442602.183:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.96:22-147.75.109.163:41654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:22.243000 audit[5075]: USER_ACCT pid=5075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.248880 sshd[5075]: Accepted publickey for core from 147.75.109.163 port 41654 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:22.249529 kernel: audit: type=1101 audit(1747442602.243:443): pid=5075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.249000 audit[5075]: CRED_ACQ pid=5075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.251517 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:22.257084 kernel: audit: type=1103 audit(1747442602.249:444): pid=5075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.257220 kernel: audit: type=1006 audit(1747442602.249:445): pid=5075 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 17 00:43:22.257248 kernel: audit: type=1300 audit(1747442602.249:445): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc7a1990 a2=3 a3=0 items=0 ppid=1 pid=5075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:22.249000 audit[5075]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc7a1990 a2=3 a3=0 items=0 ppid=1 pid=5075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:22.259316 systemd[1]: Started session-9.scope. May 17 00:43:22.260968 kernel: audit: type=1327 audit(1747442602.249:445): proctitle=737368643A20636F7265205B707269765D May 17 00:43:22.249000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:22.260495 systemd-logind[1287]: New session 9 of user core. May 17 00:43:22.266000 audit[5075]: USER_START pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.278396 kernel: audit: type=1105 audit(1747442602.266:446): pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.278000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.283397 kernel: audit: type=1103 audit(1747442602.278:447): pid=5078 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 
17 00:43:22.491114 sshd[5075]: pam_unix(sshd:session): session closed for user core May 17 00:43:22.493000 audit[5075]: USER_END pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.493000 audit[5075]: CRED_DISP pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.497631 systemd[1]: sshd@8-137.184.190.96:22-147.75.109.163:41654.service: Deactivated successfully. May 17 00:43:22.498953 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:43:22.501396 kernel: audit: type=1106 audit(1747442602.493:448): pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.501545 kernel: audit: type=1104 audit(1747442602.493:449): pid=5075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:22.501494 systemd-logind[1287]: Session 9 logged out. Waiting for processes to exit. May 17 00:43:22.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-137.184.190.96:22-147.75.109.163:41654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:22.503095 systemd-logind[1287]: Removed session 9. 
May 17 00:43:27.497222 systemd[1]: Started sshd@9-137.184.190.96:22-147.75.109.163:41662.service. May 17 00:43:27.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.96:22-147.75.109.163:41662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:27.499070 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:43:27.499155 kernel: audit: type=1130 audit(1747442607.496:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.96:22-147.75.109.163:41662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:27.565000 audit[5088]: USER_ACCT pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.568552 sshd[5088]: Accepted publickey for core from 147.75.109.163 port 41662 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:27.569000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.573905 kernel: audit: type=1101 audit(1747442607.565:452): pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.574058 kernel: audit: type=1103 audit(1747442607.569:453): pid=5088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.574090 kernel: audit: type=1006 audit(1747442607.569:454): pid=5088 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 00:43:27.569000 audit[5088]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe53b5f300 a2=3 a3=0 items=0 ppid=1 pid=5088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.576641 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:27.580617 kernel: audit: type=1300 audit(1747442607.569:454): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe53b5f300 a2=3 a3=0 items=0 ppid=1 pid=5088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.580767 kernel: audit: type=1327 audit(1747442607.569:454): proctitle=737368643A20636F7265205B707269765D May 17 00:43:27.569000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:27.587845 systemd[1]: Started session-10.scope. May 17 00:43:27.588866 systemd-logind[1287]: New session 10 of user core. 
May 17 00:43:27.594000 audit[5088]: USER_START pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.598000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.603244 kernel: audit: type=1105 audit(1747442607.594:455): pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.603482 kernel: audit: type=1103 audit(1747442607.598:456): pid=5091 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.807777 sshd[5088]: pam_unix(sshd:session): session closed for user core May 17 00:43:27.808000 audit[5088]: USER_END pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.810000 audit[5088]: CRED_DISP pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 
17 00:43:27.817107 kernel: audit: type=1106 audit(1747442607.808:457): pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.817275 kernel: audit: type=1104 audit(1747442607.810:458): pid=5088 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:27.815366 systemd[1]: sshd@9-137.184.190.96:22-147.75.109.163:41662.service: Deactivated successfully. May 17 00:43:27.816608 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:43:27.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-137.184.190.96:22-147.75.109.163:41662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:27.818068 systemd-logind[1287]: Session 10 logged out. Waiting for processes to exit. May 17 00:43:27.819416 systemd-logind[1287]: Removed session 10. 
May 17 00:43:31.368170 env[1298]: time="2025-05-17T00:43:31.367817864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:43:31.601582 env[1298]: time="2025-05-17T00:43:31.601234074Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:31.602661 env[1298]: time="2025-05-17T00:43:31.602521788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:31.606239 kubelet[2089]: E0517 00:43:31.604959 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:31.606717 kubelet[2089]: E0517 00:43:31.606265 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:31.606820 env[1298]: time="2025-05-17T00:43:31.606782909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:43:31.612142 kubelet[2089]: E0517 00:43:31.612057 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2chb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:31.613740 kubelet[2089]: E0517 00:43:31.613691 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:43:31.822249 env[1298]: time="2025-05-17T00:43:31.821969060Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:31.823662 env[1298]: 
time="2025-05-17T00:43:31.823584632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:31.824215 kubelet[2089]: E0517 00:43:31.823884 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:31.824215 kubelet[2089]: E0517 00:43:31.823947 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:31.824389 kubelet[2089]: E0517 00:43:31.824090 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8231cfdb1ec4b548cb8c42dc4784a49,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:31.830446 env[1298]: time="2025-05-17T00:43:31.830402503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:43:32.060414 
env[1298]: time="2025-05-17T00:43:32.060266451Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:32.061428 env[1298]: time="2025-05-17T00:43:32.061339933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:32.061774 kubelet[2089]: E0517 00:43:32.061733 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:32.061916 kubelet[2089]: E0517 00:43:32.061890 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:32.062248 kubelet[2089]: E0517 00:43:32.062128 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:32.063810 kubelet[2089]: E0517 00:43:32.063757 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:43:32.815770 systemd[1]: Started sshd@10-137.184.190.96:22-147.75.109.163:57018.service. May 17 00:43:32.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.96:22-147.75.109.163:57018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:32.819009 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:43:32.819081 kernel: audit: type=1130 audit(1747442612.817:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.96:22-147.75.109.163:57018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:32.944000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.945582 sshd[5127]: Accepted publickey for core from 147.75.109.163 port 57018 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:32.949401 kernel: audit: type=1101 audit(1747442612.944:461): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.949000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.954389 kernel: audit: type=1103 audit(1747442612.949:462): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.954564 kernel: audit: type=1006 audit(1747442612.949:463): pid=5127 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 17 00:43:32.949000 audit[5127]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee9285080 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:32.959912 kernel: audit: type=1300 audit(1747442612.949:463): 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee9285080 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:32.960102 kernel: audit: type=1327 audit(1747442612.949:463): proctitle=737368643A20636F7265205B707269765D May 17 00:43:32.949000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:32.961981 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:32.972249 systemd[1]: Started session-11.scope. May 17 00:43:32.972835 systemd-logind[1287]: New session 11 of user core. May 17 00:43:32.979000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.982000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.986267 kernel: audit: type=1105 audit(1747442612.979:464): pid=5127 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:32.986527 kernel: audit: type=1103 audit(1747442612.982:465): pid=5130 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' 
May 17 00:43:33.241913 sshd[5127]: pam_unix(sshd:session): session closed for user core May 17 00:43:33.246000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.246313 systemd[1]: Started sshd@11-137.184.190.96:22-147.75.109.163:57022.service. May 17 00:43:33.252577 kernel: audit: type=1106 audit(1747442613.246:466): pid=5127 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.256611 kernel: audit: type=1130 audit(1747442613.251:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.96:22-147.75.109.163:57022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:33.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.96:22-147.75.109.163:57022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:33.252000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.257511 systemd[1]: sshd@10-137.184.190.96:22-147.75.109.163:57018.service: Deactivated successfully. May 17 00:43:33.259646 systemd[1]: session-11.scope: Deactivated successfully. 
May 17 00:43:33.260263 systemd-logind[1287]: Session 11 logged out. Waiting for processes to exit. May 17 00:43:33.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-137.184.190.96:22-147.75.109.163:57018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:33.262537 systemd-logind[1287]: Removed session 11. May 17 00:43:33.319000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.320379 sshd[5138]: Accepted publickey for core from 147.75.109.163 port 57022 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:33.322000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.323000 audit[5138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b525b90 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:33.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:33.324016 sshd[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:33.331396 systemd[1]: Started session-12.scope. May 17 00:43:33.331713 systemd-logind[1287]: New session 12 of user core. 
May 17 00:43:33.340000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.343000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.592833 sshd[5138]: pam_unix(sshd:session): session closed for user core May 17 00:43:33.598655 systemd[1]: Started sshd@12-137.184.190.96:22-147.75.109.163:57034.service. May 17 00:43:33.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.190.96:22-147.75.109.163:57034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:33.611000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.613000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-137.184.190.96:22-147.75.109.163:57022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:33.615427 systemd[1]: sshd@11-137.184.190.96:22-147.75.109.163:57022.service: Deactivated successfully. May 17 00:43:33.616594 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:43:33.618477 systemd-logind[1287]: Session 12 logged out. Waiting for processes to exit. May 17 00:43:33.625192 systemd-logind[1287]: Removed session 12. May 17 00:43:33.695000 audit[5148]: USER_ACCT pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.696458 sshd[5148]: Accepted publickey for core from 147.75.109.163 port 57034 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:33.697000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.697000 audit[5148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeda6332a0 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:33.697000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:33.698802 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:33.706128 systemd[1]: Started session-13.scope. May 17 00:43:33.706642 systemd-logind[1287]: New session 13 of user core. 
May 17 00:43:33.713000 audit[5148]: USER_START pid=5148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.716000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.893832 sshd[5148]: pam_unix(sshd:session): session closed for user core May 17 00:43:33.894000 audit[5148]: USER_END pid=5148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.895000 audit[5148]: CRED_DISP pid=5148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:33.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-137.184.190.96:22-147.75.109.163:57034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:33.898398 systemd[1]: sshd@12-137.184.190.96:22-147.75.109.163:57034.service: Deactivated successfully. May 17 00:43:33.900518 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:43:33.901901 systemd-logind[1287]: Session 13 logged out. Waiting for processes to exit. May 17 00:43:33.903396 systemd-logind[1287]: Removed session 13. 
May 17 00:43:38.900558 systemd[1]: Started sshd@13-137.184.190.96:22-147.75.109.163:34366.service. May 17 00:43:38.910094 kernel: kauditd_printk_skb: 23 callbacks suppressed May 17 00:43:38.910409 kernel: audit: type=1130 audit(1747442618.901:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.190.96:22-147.75.109.163:34366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:38.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.190.96:22-147.75.109.163:34366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:38.979813 kernel: audit: type=1101 audit(1747442618.969:488): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:38.979973 kernel: audit: type=1103 audit(1747442618.974:489): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:38.969000 audit[5190]: USER_ACCT pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:38.974000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 
00:43:38.976545 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:38.980868 sshd[5190]: Accepted publickey for core from 147.75.109.163 port 34366 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:38.974000 audit[5190]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb06a1520 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:38.988841 kernel: audit: type=1006 audit(1747442618.974:490): pid=5190 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 17 00:43:38.989104 kernel: audit: type=1300 audit(1747442618.974:490): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb06a1520 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:38.991325 kernel: audit: type=1327 audit(1747442618.974:490): proctitle=737368643A20636F7265205B707269765D May 17 00:43:38.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:38.994344 systemd-logind[1287]: New session 14 of user core. May 17 00:43:38.996223 systemd[1]: Started session-14.scope. 
May 17 00:43:39.005000 audit[5190]: USER_START pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.008000 audit[5193]: CRED_ACQ pid=5193 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.015995 kernel: audit: type=1105 audit(1747442619.005:491): pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.016197 kernel: audit: type=1103 audit(1747442619.008:492): pid=5193 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.198059 sshd[5190]: pam_unix(sshd:session): session closed for user core May 17 00:43:39.200000 audit[5190]: USER_END pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.200000 audit[5190]: CRED_DISP pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 
17 00:43:39.204632 systemd[1]: sshd@13-137.184.190.96:22-147.75.109.163:34366.service: Deactivated successfully. May 17 00:43:39.209195 kernel: audit: type=1106 audit(1747442619.200:493): pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.209359 kernel: audit: type=1104 audit(1747442619.200:494): pid=5190 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:39.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-137.184.190.96:22-147.75.109.163:34366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:39.209870 systemd-logind[1287]: Session 14 logged out. Waiting for processes to exit. May 17 00:43:39.210060 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:43:39.212223 systemd-logind[1287]: Removed session 14. May 17 00:43:42.417070 systemd[1]: run-containerd-runc-k8s.io-1aff888bb8a7387ef8b1be822f540fa496d11f752b752022a8297e522801b1d8-runc.okv7vx.mount: Deactivated successfully. 
May 17 00:43:43.356270 kubelet[2089]: E0517 00:43:43.356167 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:43.358032 kubelet[2089]: E0517 00:43:43.357987 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:43:44.203243 systemd[1]: Started sshd@14-137.184.190.96:22-147.75.109.163:34382.service. May 17 00:43:44.204827 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:43:44.204940 kernel: audit: type=1130 audit(1747442624.202:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.96:22-147.75.109.163:34382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:44.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.96:22-147.75.109.163:34382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:43:44.289000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.291215 sshd[5225]: Accepted publickey for core from 147.75.109.163 port 34382 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:43:44.294320 kernel: audit: type=1101 audit(1747442624.289:497): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.293000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.296407 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:43:44.299985 kernel: audit: type=1103 audit(1747442624.293:498): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.300114 kernel: audit: type=1006 audit(1747442624.293:499): pid=5225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 17 00:43:44.300164 kernel: audit: type=1300 audit(1747442624.293:499): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe3125100 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:44.293000 audit[5225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe3125100 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:44.293000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:44.304647 kernel: audit: type=1327 audit(1747442624.293:499): proctitle=737368643A20636F7265205B707269765D May 17 00:43:44.305787 systemd[1]: Started session-15.scope. May 17 00:43:44.306151 systemd-logind[1287]: New session 15 of user core. May 17 00:43:44.312000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.315000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.322669 kernel: audit: type=1105 audit(1747442624.312:500): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.322791 kernel: audit: type=1103 audit(1747442624.315:501): pid=5228 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' May 17 00:43:44.588089 sshd[5225]: pam_unix(sshd:session): session closed for user core May 17 00:43:44.590000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.592000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.601849 kernel: audit: type=1106 audit(1747442624.590:502): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.602004 kernel: audit: type=1104 audit(1747442624.592:503): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:43:44.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-137.184.190.96:22-147.75.109.163:34382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:44.599496 systemd[1]: sshd@14-137.184.190.96:22-147.75.109.163:34382.service: Deactivated successfully. May 17 00:43:44.601270 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:43:44.601438 systemd-logind[1287]: Session 15 logged out. Waiting for processes to exit. 
May 17 00:43:44.603489 systemd-logind[1287]: Removed session 15. May 17 00:43:46.360159 kubelet[2089]: E0517 00:43:46.360098 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:43:46.363986 kubelet[2089]: E0517 00:43:46.363735 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:43:49.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.96:22-147.75.109.163:44448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:49.594164 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:43:49.594213 kernel: audit: type=1130 audit(1747442629.591:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.96:22-147.75.109.163:44448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:49.592891 systemd[1]: Started sshd@15-137.184.190.96:22-147.75.109.163:44448.service. 
May 17 00:43:49.669000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.671260 sshd[5237]: Accepted publickey for core from 147.75.109.163 port 44448 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:49.674373 kernel: audit: type=1101 audit(1747442629.669:506): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.673000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.675856 sshd[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:49.687365 kernel: audit: type=1103 audit(1747442629.673:507): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.687563 kernel: audit: type=1006 audit(1747442629.673:508): pid=5237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
May 17 00:43:49.687624 kernel: audit: type=1300 audit(1747442629.673:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9378cf30 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:49.673000 audit[5237]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9378cf30 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:49.684380 systemd[1]: Started session-16.scope.
May 17 00:43:49.673000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:43:49.686079 systemd-logind[1287]: New session 16 of user core.
May 17 00:43:49.690389 kernel: audit: type=1327 audit(1747442629.673:508): proctitle=737368643A20636F7265205B707269765D
May 17 00:43:49.694000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.694000 audit[5240]: CRED_ACQ pid=5240 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.703878 kernel: audit: type=1105 audit(1747442629.694:509): pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.704163 kernel: audit: type=1103 audit(1747442629.694:510): pid=5240 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.879652 sshd[5237]: pam_unix(sshd:session): session closed for user core
May 17 00:43:49.881000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.886345 kernel: audit: type=1106 audit(1747442629.881:511): pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.885000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.888607 systemd[1]: sshd@15-137.184.190.96:22-147.75.109.163:44448.service: Deactivated successfully.
May 17 00:43:49.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-137.184.190.96:22-147.75.109.163:44448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:49.890339 kernel: audit: type=1104 audit(1747442629.885:512): pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:49.890724 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:43:49.891184 systemd-logind[1287]: Session 16 logged out. Waiting for processes to exit.
May 17 00:43:49.892264 systemd-logind[1287]: Removed session 16.
May 17 00:43:54.884726 systemd[1]: Started sshd@16-137.184.190.96:22-147.75.109.163:44454.service.
May 17 00:43:54.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.96:22-147.75.109.163:44454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:54.886807 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:43:54.886903 kernel: audit: type=1130 audit(1747442634.884:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.96:22-147.75.109.163:44454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:55.011000 audit[5250]: USER_ACCT pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.012771 sshd[5250]: Accepted publickey for core from 147.75.109.163 port 44454 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:55.017336 kernel: audit: type=1101 audit(1747442635.011:515): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.017000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.024345 kernel: audit: type=1103 audit(1747442635.017:516): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.027340 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:55.037447 kernel: audit: type=1006 audit(1747442635.017:517): pid=5250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
May 17 00:43:55.039001 systemd-logind[1287]: New session 17 of user core.
May 17 00:43:55.040771 systemd[1]: Started session-17.scope.
May 17 00:43:55.017000 audit[5250]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaaca13f0 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:55.050367 kernel: audit: type=1300 audit(1747442635.017:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaaca13f0 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:55.017000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:43:55.054333 kernel: audit: type=1327 audit(1747442635.017:517): proctitle=737368643A20636F7265205B707269765D
May 17 00:43:55.066731 kernel: audit: type=1105 audit(1747442635.056:518): pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.056000 audit[5250]: USER_START pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.058000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.081421 kernel: audit: type=1103 audit(1747442635.058:519): pid=5253 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.527408 sshd[5250]: pam_unix(sshd:session): session closed for user core
May 17 00:43:55.529635 systemd[1]: Started sshd@17-137.184.190.96:22-147.75.109.163:44468.service.
May 17 00:43:55.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.190.96:22-147.75.109.163:44468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:55.541489 kernel: audit: type=1130 audit(1747442635.529:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.190.96:22-147.75.109.163:44468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:55.541988 kernel: audit: type=1106 audit(1747442635.534:521): pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.534000 audit[5250]: USER_END pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.543452 systemd[1]: sshd@16-137.184.190.96:22-147.75.109.163:44454.service: Deactivated successfully.
May 17 00:43:55.534000 audit[5250]: CRED_DISP pid=5250 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-137.184.190.96:22-147.75.109.163:44454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:55.560063 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:43:55.562019 systemd-logind[1287]: Session 17 logged out. Waiting for processes to exit.
May 17 00:43:55.564857 systemd-logind[1287]: Removed session 17.
May 17 00:43:55.631000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.633386 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 44468 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:55.633000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.633000 audit[5261]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfa621e90 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:55.633000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:43:55.635820 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:55.651961 systemd-logind[1287]: New session 18 of user core.
May 17 00:43:55.653747 systemd[1]: Started session-18.scope.
May 17 00:43:55.666000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:55.669000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.074175 sshd[5261]: pam_unix(sshd:session): session closed for user core
May 17 00:43:56.074000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.074000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.190.96:22-147.75.109.163:44482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:56.080269 systemd[1]: Started sshd@18-137.184.190.96:22-147.75.109.163:44482.service.
May 17 00:43:56.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-137.184.190.96:22-147.75.109.163:44468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:43:56.084392 systemd[1]: sshd@17-137.184.190.96:22-147.75.109.163:44468.service: Deactivated successfully.
May 17 00:43:56.088805 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:43:56.091625 systemd-logind[1287]: Session 18 logged out. Waiting for processes to exit.
May 17 00:43:56.098746 systemd-logind[1287]: Removed session 18.
May 17 00:43:56.151000 audit[5272]: USER_ACCT pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.153385 sshd[5272]: Accepted publickey for core from 147.75.109.163 port 44482 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:56.153000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.153000 audit[5272]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe72d739c0 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:56.153000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:43:56.155145 sshd[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:56.161526 systemd[1]: Started session-19.scope.
May 17 00:43:56.162499 systemd-logind[1287]: New session 19 of user core.
May 17 00:43:56.168000 audit[5272]: USER_START pid=5272 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:56.170000 audit[5277]: CRED_ACQ pid=5277 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:43:58.470357 kubelet[2089]: E0517 00:43:58.462442 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06"
May 17 00:44:00.309663 sshd[5272]: pam_unix(sshd:session): session closed for user core
May 17 00:44:00.368022 systemd[1]: Started sshd@19-137.184.190.96:22-147.75.109.163:52148.service.
May 17 00:44:00.407195 kernel: kauditd_printk_skb: 20 callbacks suppressed
May 17 00:44:00.407366 kernel: audit: type=1130 audit(1747442640.370:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.190.96:22-147.75.109.163:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:00.410056 kernel: audit: type=1106 audit(1747442640.373:539): pid=5272 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.410212 kernel: audit: type=1104 audit(1747442640.376:540): pid=5272 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.410324 kernel: audit: type=1131 audit(1747442640.397:541): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.190.96:22-147.75.109.163:44482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:00.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.190.96:22-147.75.109.163:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:00.373000 audit[5272]: USER_END pid=5272 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.376000 audit[5272]: CRED_DISP pid=5272 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-137.184.190.96:22-147.75.109.163:44482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:00.398856 systemd[1]: sshd@18-137.184.190.96:22-147.75.109.163:44482.service: Deactivated successfully.
May 17 00:44:00.401575 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:44:00.414794 systemd-logind[1287]: Session 19 logged out. Waiting for processes to exit.
May 17 00:44:00.437743 systemd-logind[1287]: Removed session 19.
May 17 00:44:00.795000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.802330 kernel: audit: type=1101 audit(1747442640.795:542): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.805040 sshd[5286]: Accepted publickey for core from 147.75.109.163 port 52148 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:00.805000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.812348 kernel: audit: type=1103 audit(1747442640.805:543): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.810000 audit[5286]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb1961b90 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.822338 kernel: audit: type=1006 audit(1747442640.810:544): pid=5286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
May 17 00:44:00.822526 kernel: audit: type=1300 audit(1747442640.810:544): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb1961b90 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.822631 kernel: audit: type=1327 audit(1747442640.810:544): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:00.810000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:00.824893 kernel: audit: type=1325 audit(1747442640.812:545): table=filter:130 family=2 entries=24 op=nft_register_rule pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:00.812000 audit[5291]: NETFILTER_CFG table=filter:130 family=2 entries=24 op=nft_register_rule pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:00.812000 audit[5291]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7fffae99b2e0 a2=0 a3=7fffae99b2cc items=0 ppid=2190 pid=5291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:00.827000 audit[5291]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:00.827000 audit[5291]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffae99b2e0 a2=0 a3=0 items=0 ppid=2190 pid=5291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:00.824766 sshd[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:00.854505 systemd-logind[1287]: New session 20 of user core.
May 17 00:44:00.855219 systemd[1]: Started session-20.scope.
May 17 00:44:00.883000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.888000 audit[5294]: CRED_ACQ pid=5294 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:00.934000 audit[5295]: NETFILTER_CFG table=filter:132 family=2 entries=36 op=nft_register_rule pid=5295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:00.934000 audit[5295]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffd80100840 a2=0 a3=7ffd8010082c items=0 ppid=2190 pid=5295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:00.949000 audit[5295]: NETFILTER_CFG table=nat:133 family=2 entries=22 op=nft_register_rule pid=5295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:00.949000 audit[5295]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd80100840 a2=0 a3=0 items=0 ppid=2190 pid=5295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:00.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:03.107462 kubelet[2089]: E0517 00:44:03.102397 2089 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.482s"
May 17 00:44:03.143507 kubelet[2089]: E0517 00:44:03.143448 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422"
May 17 00:44:03.247287 kubelet[2089]: E0517 00:44:03.246638 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:44:03.251508 kubelet[2089]: E0517 00:44:03.251435 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:44:03.979050 sshd[5286]: pam_unix(sshd:session): session closed for user core
May 17 00:44:03.997658 systemd[1]: Started sshd@20-137.184.190.96:22-147.75.109.163:52158.service.
May 17 00:44:04.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.190.96:22-147.75.109.163:52158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:04.011000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.011000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.021814 systemd[1]: sshd@19-137.184.190.96:22-147.75.109.163:52148.service: Deactivated successfully.
May 17 00:44:04.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-137.184.190.96:22-147.75.109.163:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:04.023790 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:44:04.023935 systemd-logind[1287]: Session 20 logged out. Waiting for processes to exit.
May 17 00:44:04.034867 systemd-logind[1287]: Removed session 20.
May 17 00:44:04.134000 audit[5306]: USER_ACCT pid=5306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.136245 sshd[5306]: Accepted publickey for core from 147.75.109.163 port 52158 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:04.137000 audit[5306]: CRED_ACQ pid=5306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.137000 audit[5306]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc632e5380 a2=3 a3=0 items=0 ppid=1 pid=5306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:04.137000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:04.139641 sshd[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:04.148628 systemd-logind[1287]: New session 21 of user core.
May 17 00:44:04.149272 systemd[1]: Started session-21.scope.
May 17 00:44:04.155000 audit[5306]: USER_START pid=5306 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.158000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.355524 sshd[5306]: pam_unix(sshd:session): session closed for user core
May 17 00:44:04.356000 audit[5306]: USER_END pid=5306 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.357000 audit[5306]: CRED_DISP pid=5306 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:04.361461 systemd[1]: sshd@20-137.184.190.96:22-147.75.109.163:52158.service: Deactivated successfully.
May 17 00:44:04.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-137.184.190.96:22-147.75.109.163:52158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:04.363494 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:44:04.364469 systemd-logind[1287]: Session 21 logged out. Waiting for processes to exit.
May 17 00:44:04.366427 systemd-logind[1287]: Removed session 21.
May 17 00:44:07.957744 kernel: kauditd_printk_skb: 27 callbacks suppressed May 17 00:44:07.963948 kernel: audit: type=1325 audit(1747442647.952:563): table=filter:134 family=2 entries=24 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:07.964407 kernel: audit: type=1300 audit(1747442647.952:563): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffef61dad30 a2=0 a3=7ffef61dad1c items=0 ppid=2190 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:07.952000 audit[5341]: NETFILTER_CFG table=filter:134 family=2 entries=24 op=nft_register_rule pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:07.952000 audit[5341]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffef61dad30 a2=0 a3=7ffef61dad1c items=0 ppid=2190 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:07.969607 kernel: audit: type=1327 audit(1747442647.952:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:07.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:07.970000 audit[5341]: NETFILTER_CFG table=nat:135 family=2 entries=106 op=nft_register_chain pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:07.975820 kernel: audit: type=1325 audit(1747442647.970:564): table=nat:135 family=2 entries=106 op=nft_register_chain pid=5341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:07.970000 audit[5341]: SYSCALL arch=c000003e syscall=46 
success=yes exit=49452 a0=3 a1=7ffef61dad30 a2=0 a3=7ffef61dad1c items=0 ppid=2190 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:07.981377 kernel: audit: type=1300 audit(1747442647.970:564): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffef61dad30 a2=0 a3=7ffef61dad1c items=0 ppid=2190 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:07.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:07.986365 kernel: audit: type=1327 audit(1747442647.970:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:09.359829 systemd[1]: Started sshd@21-137.184.190.96:22-147.75.109.163:36116.service. May 17 00:44:09.364222 kernel: audit: type=1130 audit(1747442649.358:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.96:22-147.75.109.163:36116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:09.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.96:22-147.75.109.163:36116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:09.500000 audit[5349]: USER_ACCT pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:09.503334 sshd[5349]: Accepted publickey for core from 147.75.109.163 port 36116 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:09.506069 kernel: audit: type=1101 audit(1747442649.500:566): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:09.506183 kernel: audit: type=1103 audit(1747442649.504:567): pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:09.504000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:09.507637 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:09.520349 kernel: audit: type=1006 audit(1747442649.504:568): pid=5349 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 17 00:44:09.519875 systemd[1]: Started session-22.scope. May 17 00:44:09.521714 systemd-logind[1287]: New session 22 of user core. 
May 17 00:44:09.504000 audit[5349]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc63c034b0 a2=3 a3=0 items=0 ppid=1 pid=5349 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:09.504000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:09.546000 audit[5349]: USER_START pid=5349 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:09.549000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:10.089744 sshd[5349]: pam_unix(sshd:session): session closed for user core May 17 00:44:10.090000 audit[5349]: USER_END pid=5349 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:10.090000 audit[5349]: CRED_DISP pid=5349 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:10.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-137.184.190.96:22-147.75.109.163:36116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:10.095570 systemd-logind[1287]: Session 22 logged out. Waiting for processes to exit. May 17 00:44:10.096938 systemd[1]: sshd@21-137.184.190.96:22-147.75.109.163:36116.service: Deactivated successfully. May 17 00:44:10.098014 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:44:10.100542 systemd-logind[1287]: Removed session 22. May 17 00:44:13.385044 env[1298]: time="2025-05-17T00:44:13.384791634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:44:13.670070 env[1298]: time="2025-05-17T00:44:13.669907307Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:44:13.671005 env[1298]: time="2025-05-17T00:44:13.670943931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:44:13.679261 kubelet[2089]: E0517 00:44:13.678033 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:44:13.681502 kubelet[2089]: E0517 00:44:13.681171 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:44:13.695006 kubelet[2089]: 
E0517 00:44:13.694926 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8231cfdb1ec4b548cb8c42dc4784a49,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:44:13.698468 env[1298]: time="2025-05-17T00:44:13.697615406Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:44:13.929515 env[1298]: time="2025-05-17T00:44:13.929308737Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:44:13.932607 env[1298]: time="2025-05-17T00:44:13.932433735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:44:13.933796 kubelet[2089]: E0517 00:44:13.933001 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:44:13.933796 kubelet[2089]: E0517 00:44:13.933076 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:44:13.933796 kubelet[2089]: E0517 00:44:13.933705 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jhg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-667b98bdd5-df8g2_calico-system(f9ff357f-535d-47da-b193-4f0d8aef3d06): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:44:13.936100 kubelet[2089]: E0517 00:44:13.936012 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06" May 17 00:44:14.353469 env[1298]: time="2025-05-17T00:44:14.353279716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:44:14.553343 env[1298]: time="2025-05-17T00:44:14.553245926Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:44:14.554047 env[1298]: time="2025-05-17T00:44:14.553985287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:44:14.554381 kubelet[2089]: E0517 00:44:14.554329 2089 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:44:14.554572 kubelet[2089]: E0517 00:44:14.554552 2089 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:44:14.554837 kubelet[2089]: E0517 00:44:14.554792 2089 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2chb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-w9g2b_calico-system(2195973a-24fa-48aa-9e37-bf651df76422): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:44:14.556276 kubelet[2089]: E0517 00:44:14.556234 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422" May 17 00:44:15.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.96:22-147.75.109.163:36122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:15.099663 systemd[1]: Started sshd@22-137.184.190.96:22-147.75.109.163:36122.service. May 17 00:44:15.101260 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:44:15.102683 kernel: audit: type=1130 audit(1747442655.098:574): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.96:22-147.75.109.163:36122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:15.209000 audit[5389]: USER_ACCT pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.211204 sshd[5389]: Accepted publickey for core from 147.75.109.163 port 36122 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w May 17 00:44:15.214385 kernel: audit: type=1101 audit(1747442655.209:575): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.215000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.221355 kernel: audit: type=1103 audit(1747442655.215:576): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.224365 kernel: audit: type=1006 audit(1747442655.215:577): pid=5389 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 17 00:44:15.224835 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:15.215000 audit[5389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe203380e0 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:15.233313 kernel: audit: type=1300 audit(1747442655.215:577): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe203380e0 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:15.215000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:15.237990 kernel: audit: type=1327 audit(1747442655.215:577): proctitle=737368643A20636F7265205B707269765D May 17 00:44:15.245569 systemd-logind[1287]: New session 23 of user core. May 17 00:44:15.246091 systemd[1]: Started session-23.scope. May 17 00:44:15.255000 audit[5389]: USER_START pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.261928 kernel: audit: type=1105 audit(1747442655.255:578): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.262062 kernel: audit: type=1103 audit(1747442655.259:579): pid=5392 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.259000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' May 17 00:44:15.835195 sshd[5389]: pam_unix(sshd:session): session closed for user core May 17 00:44:15.835000 audit[5389]: USER_END pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.843360 kernel: audit: type=1106 audit(1747442655.835:580): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.844424 systemd-logind[1287]: Session 23 logged out. Waiting for processes to exit. May 17 00:44:15.844808 systemd[1]: sshd@22-137.184.190.96:22-147.75.109.163:36122.service: Deactivated successfully. May 17 00:44:15.846133 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:44:15.835000 audit[5389]: CRED_DISP pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-137.184.190.96:22-147.75.109.163:36122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:15.851331 kernel: audit: type=1104 audit(1747442655.835:581): pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 00:44:15.851630 systemd-logind[1287]: Removed session 23. May 17 00:44:16.840466 update_engine[1288]: I0517 00:44:16.839722 1288 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:44:16.841325 update_engine[1288]: I0517 00:44:16.840498 1288 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:44:16.844279 update_engine[1288]: I0517 00:44:16.844235 1288 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:44:16.845032 update_engine[1288]: I0517 00:44:16.844993 1288 omaha_request_params.cc:62] Current group set to lts May 17 00:44:16.854420 update_engine[1288]: I0517 00:44:16.854372 1288 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:44:16.854420 update_engine[1288]: I0517 00:44:16.854406 1288 update_attempter.cc:643] Scheduling an action processor start. 
May 17 00:44:16.854907 update_engine[1288]: I0517 00:44:16.854872 1288 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:44:16.857056 update_engine[1288]: I0517 00:44:16.856597 1288 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 00:44:16.857215 update_engine[1288]: I0517 00:44:16.857192 1288 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 17 00:44:16.857215 update_engine[1288]: I0517 00:44:16.857211 1288 omaha_request_action.cc:271] Request:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857215 update_engine[1288]:
May 17 00:44:16.857959 update_engine[1288]: I0517 00:44:16.857220 1288 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:44:16.873324 update_engine[1288]: I0517 00:44:16.873250 1288 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:44:16.873563 update_engine[1288]: E0517 00:44:16.873476 1288 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:44:16.873642 update_engine[1288]: I0517 00:44:16.873601 1288 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 00:44:16.895945 locksmithd[1334]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 00:44:20.843132 systemd[1]: Started sshd@23-137.184.190.96:22-147.75.109.163:38080.service.
May 17 00:44:20.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.96:22-147.75.109.163:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:20.844791 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:44:20.845778 kernel: audit: type=1130 audit(1747442660.843:583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.96:22-147.75.109.163:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:20.978000 audit[5403]: USER_ACCT pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:20.979019 sshd[5403]: Accepted publickey for core from 147.75.109.163 port 38080 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:20.982319 kernel: audit: type=1101 audit(1747442660.978:584): pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:20.983000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:20.987333 kernel: audit: type=1103 audit(1747442660.983:585): pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:20.988315 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:20.990336 kernel: audit: type=1006 audit(1747442660.983:586): pid=5403 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
May 17 00:44:20.983000 audit[5403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc598f9360 a2=3 a3=0 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:20.997420 kernel: audit: type=1300 audit(1747442660.983:586): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc598f9360 a2=3 a3=0 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:21.001699 systemd-logind[1287]: New session 24 of user core.
May 17 00:44:21.002693 systemd[1]: Started session-24.scope.
May 17 00:44:20.983000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:21.011461 kernel: audit: type=1327 audit(1747442660.983:586): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:21.015000 audit[5403]: USER_START pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.021381 kernel: audit: type=1105 audit(1747442661.015:587): pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.022000 audit[5406]: CRED_ACQ pid=5406 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.031428 kernel: audit: type=1103 audit(1747442661.022:588): pid=5406 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.679648 sshd[5403]: pam_unix(sshd:session): session closed for user core
May 17 00:44:21.680000 audit[5403]: USER_END pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.684897 systemd[1]: sshd@23-137.184.190.96:22-147.75.109.163:38080.service: Deactivated successfully.
May 17 00:44:21.685510 kernel: audit: type=1106 audit(1747442661.680:589): pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.686255 systemd-logind[1287]: Session 24 logged out. Waiting for processes to exit.
May 17 00:44:21.680000 audit[5403]: CRED_DISP pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.686345 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:44:21.690513 kernel: audit: type=1104 audit(1747442661.680:590): pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:21.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-137.184.190.96:22-147.75.109.163:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:21.691051 systemd-logind[1287]: Removed session 24.
May 17 00:44:24.362866 kubelet[2089]: E0517 00:44:24.362785 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:44:25.360477 kubelet[2089]: E0517 00:44:25.360383 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-667b98bdd5-df8g2" podUID="f9ff357f-535d-47da-b193-4f0d8aef3d06"
May 17 00:44:26.352899 kubelet[2089]: E0517 00:44:26.352852 2089 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-w9g2b" podUID="2195973a-24fa-48aa-9e37-bf651df76422"
May 17 00:44:26.687453 systemd[1]: Started sshd@24-137.184.190.96:22-147.75.109.163:38086.service.
May 17 00:44:26.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-137.184.190.96:22-147.75.109.163:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:26.688789 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:44:26.689231 kernel: audit: type=1130 audit(1747442666.687:592): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-137.184.190.96:22-147.75.109.163:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:26.784000 audit[5437]: USER_ACCT pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.784688 sshd[5437]: Accepted publickey for core from 147.75.109.163 port 38086 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:26.787000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.791260 kernel: audit: type=1101 audit(1747442666.784:593): pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.791466 kernel: audit: type=1103 audit(1747442666.787:594): pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.794359 kernel: audit: type=1006 audit(1747442666.787:595): pid=5437 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
May 17 00:44:26.794559 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:26.787000 audit[5437]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4f902dc0 a2=3 a3=0 items=0 ppid=1 pid=5437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:26.803335 kernel: audit: type=1300 audit(1747442666.787:595): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4f902dc0 a2=3 a3=0 items=0 ppid=1 pid=5437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:26.805379 systemd-logind[1287]: New session 25 of user core.
May 17 00:44:26.806231 systemd[1]: Started session-25.scope.
May 17 00:44:26.787000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:26.816540 kernel: audit: type=1327 audit(1747442666.787:595): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:26.818000 audit[5437]: USER_START pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.823378 kernel: audit: type=1105 audit(1747442666.818:596): pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.824000 audit[5440]: CRED_ACQ pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.829023 update_engine[1288]: I0517 00:44:26.828852 1288 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:44:26.832257 kernel: audit: type=1103 audit(1747442666.824:597): pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:26.834282 update_engine[1288]: I0517 00:44:26.829212 1288 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:44:26.834282 update_engine[1288]: E0517 00:44:26.829340 1288 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:44:26.834282 update_engine[1288]: I0517 00:44:26.829422 1288 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 17 00:44:27.274796 sshd[5437]: pam_unix(sshd:session): session closed for user core
May 17 00:44:27.276000 audit[5437]: USER_END pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:27.280000 audit[5437]: CRED_DISP pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:27.284052 kernel: audit: type=1106 audit(1747442667.276:598): pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:27.284213 kernel: audit: type=1104 audit(1747442667.280:599): pid=5437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:27.289693 systemd[1]: sshd@24-137.184.190.96:22-147.75.109.163:38086.service: Deactivated successfully.
May 17 00:44:27.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-137.184.190.96:22-147.75.109.163:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:27.290781 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:44:27.292056 systemd-logind[1287]: Session 25 logged out. Waiting for processes to exit.
May 17 00:44:27.294365 systemd-logind[1287]: Removed session 25.
May 17 00:44:32.281076 systemd[1]: Started sshd@25-137.184.190.96:22-147.75.109.163:53968.service.
May 17 00:44:32.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-137.184.190.96:22-147.75.109.163:53968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:32.282899 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:44:32.282993 kernel: audit: type=1130 audit(1747442672.280:601): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-137.184.190.96:22-147.75.109.163:53968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:32.363000 audit[5449]: USER_ACCT pid=5449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.367645 sshd[5449]: Accepted publickey for core from 147.75.109.163 port 53968 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:32.369401 kernel: audit: type=1101 audit(1747442672.363:602): pid=5449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.375000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.384590 kernel: audit: type=1103 audit(1747442672.375:603): pid=5449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.384697 kernel: audit: type=1006 audit(1747442672.375:604): pid=5449 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
May 17 00:44:32.385126 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:32.375000 audit[5449]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3a4841c0 a2=3 a3=0 items=0 ppid=1 pid=5449 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:32.391501 kernel: audit: type=1300 audit(1747442672.375:604): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3a4841c0 a2=3 a3=0 items=0 ppid=1 pid=5449 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:32.375000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:32.394633 kernel: audit: type=1327 audit(1747442672.375:604): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:32.401102 systemd[1]: Started session-26.scope.
May 17 00:44:32.402335 systemd-logind[1287]: New session 26 of user core.
May 17 00:44:32.425829 kernel: audit: type=1105 audit(1747442672.416:605): pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.425947 kernel: audit: type=1103 audit(1747442672.423:606): pid=5452 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.416000 audit[5449]: USER_START pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.423000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.981139 sshd[5449]: pam_unix(sshd:session): session closed for user core
May 17 00:44:32.984000 audit[5449]: USER_END pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.990687 kernel: audit: type=1106 audit(1747442672.984:607): pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.993934 systemd-logind[1287]: Session 26 logged out. Waiting for processes to exit.
May 17 00:44:32.994425 systemd[1]: sshd@25-137.184.190.96:22-147.75.109.163:53968.service: Deactivated successfully.
May 17 00:44:32.995582 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:44:32.989000 audit[5449]: CRED_DISP pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.997556 systemd-logind[1287]: Removed session 26.
May 17 00:44:33.001431 kernel: audit: type=1104 audit(1747442672.989:608): pid=5449 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 00:44:32.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-137.184.190.96:22-147.75.109.163:53968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:36.353939 kubelet[2089]: E0517 00:44:36.353789 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 17 00:44:36.356282 kubelet[2089]: E0517 00:44:36.356249 2089 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"