May 17 00:38:58.856449 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:38:58.856471 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:38:58.856481 kernel: BIOS-provided physical RAM map:
May 17 00:38:58.856488 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:38:58.856494 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:38:58.856501 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:38:58.856509 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 17 00:38:58.856516 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 17 00:38:58.856524 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:38:58.856530 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:38:58.856537 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:38:58.856544 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:38:58.856550 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:38:58.856557 kernel: NX (Execute Disable) protection: active
May 17 00:38:58.856567 kernel: SMBIOS 2.8 present.
May 17 00:38:58.856575 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 17 00:38:58.856582 kernel: Hypervisor detected: KVM
May 17 00:38:58.856589 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:38:58.856596 kernel: kvm-clock: cpu 0, msr 6919a001, primary cpu clock
May 17 00:38:58.856603 kernel: kvm-clock: using sched offset of 2560988653 cycles
May 17 00:38:58.856611 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:38:58.856619 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:38:58.856626 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:38:58.856635 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:38:58.856643 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 17 00:38:58.856650 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:38:58.856658 kernel: Using GB pages for direct mapping
May 17 00:38:58.856665 kernel: ACPI: Early table checksum verification disabled
May 17 00:38:58.856673 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 17 00:38:58.856680 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856687 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856695 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856703 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 17 00:38:58.856711 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856718 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856726 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856733 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:38:58.856741 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 17 00:38:58.856748 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 17 00:38:58.856756 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 17 00:38:58.856768 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 17 00:38:58.856785 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 17 00:38:58.856794 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 17 00:38:58.856802 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 17 00:38:58.856810 kernel: No NUMA configuration found
May 17 00:38:58.856818 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 17 00:38:58.856827 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 17 00:38:58.856835 kernel: Zone ranges:
May 17 00:38:58.856843 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:38:58.856851 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 17 00:38:58.856859 kernel: Normal empty
May 17 00:38:58.856866 kernel: Movable zone start for each node
May 17 00:38:58.856874 kernel: Early memory node ranges
May 17 00:38:58.856882 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:38:58.856890 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 17 00:38:58.856898 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 17 00:38:58.856907 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:38:58.856915 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:38:58.856923 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:38:58.856931 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:38:58.856939 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:38:58.856947 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:38:58.856955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:38:58.856963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:38:58.856971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:38:58.856980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:38:58.856988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:38:58.856996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:38:58.857004 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:38:58.857012 kernel: TSC deadline timer available
May 17 00:38:58.857020 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:38:58.857027 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:38:58.857035 kernel: kvm-guest: setup PV sched yield
May 17 00:38:58.857043 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:38:58.857053 kernel: Booting paravirtualized kernel on KVM
May 17 00:38:58.857061 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:38:58.857069 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 17 00:38:58.857077 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 17 00:38:58.857085 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 17 00:38:58.857093 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:38:58.857100 kernel: kvm-guest: setup async PF for cpu 0
May 17 00:38:58.857119 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 17 00:38:58.857127 kernel: kvm-guest: PV spinlocks enabled
May 17 00:38:58.857137 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:38:58.857145 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 17 00:38:58.857153 kernel: Policy zone: DMA32
May 17 00:38:58.857162 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:38:58.857171 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:38:58.857179 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:38:58.857187 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:38:58.857195 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:38:58.857205 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
May 17 00:38:58.857213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:38:58.857221 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:38:58.857229 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:38:58.857237 kernel: rcu: Hierarchical RCU implementation.
May 17 00:38:58.857246 kernel: rcu: RCU event tracing is enabled.
May 17 00:38:58.857254 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:38:58.857262 kernel: Rude variant of Tasks RCU enabled.
May 17 00:38:58.857270 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:38:58.857279 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:38:58.857288 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:38:58.857295 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:38:58.857303 kernel: random: crng init done
May 17 00:38:58.857311 kernel: Console: colour VGA+ 80x25
May 17 00:38:58.857319 kernel: printk: console [ttyS0] enabled
May 17 00:38:58.857327 kernel: ACPI: Core revision 20210730
May 17 00:38:58.857335 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:38:58.857343 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:38:58.857353 kernel: x2apic enabled
May 17 00:38:58.857361 kernel: Switched APIC routing to physical x2apic.
May 17 00:38:58.857368 kernel: kvm-guest: setup PV IPIs
May 17 00:38:58.857376 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:38:58.857384 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:38:58.857393 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:38:58.857402 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:38:58.857411 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:38:58.857420 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:38:58.857436 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:38:58.857445 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:38:58.857453 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:38:58.857463 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:38:58.857471 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:38:58.857480 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:38:58.857488 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:38:58.857497 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:38:58.857505 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:38:58.857515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:38:58.857523 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:38:58.857532 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:38:58.857540 kernel: Freeing SMP alternatives memory: 32K
May 17 00:38:58.857548 kernel: pid_max: default: 32768 minimum: 301
May 17 00:38:58.857557 kernel: LSM: Security Framework initializing
May 17 00:38:58.857564 kernel: SELinux: Initializing.
May 17 00:38:58.857573 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:38:58.857582 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:38:58.857591 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:38:58.857599 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:38:58.857608 kernel: ... version: 0
May 17 00:38:58.857616 kernel: ... bit width: 48
May 17 00:38:58.857624 kernel: ... generic registers: 6
May 17 00:38:58.857633 kernel: ... value mask: 0000ffffffffffff
May 17 00:38:58.857641 kernel: ... max period: 00007fffffffffff
May 17 00:38:58.857649 kernel: ... fixed-purpose events: 0
May 17 00:38:58.857659 kernel: ... event mask: 000000000000003f
May 17 00:38:58.857667 kernel: signal: max sigframe size: 1776
May 17 00:38:58.857675 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:38:58.857684 kernel: smp: Bringing up secondary CPUs ...
May 17 00:38:58.857692 kernel: x86: Booting SMP configuration:
May 17 00:38:58.857700 kernel: .... node #0, CPUs: #1
May 17 00:38:58.857709 kernel: kvm-clock: cpu 1, msr 6919a041, secondary cpu clock
May 17 00:38:58.857717 kernel: kvm-guest: setup async PF for cpu 1
May 17 00:38:58.857725 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 17 00:38:58.857733 kernel: #2
May 17 00:38:58.857743 kernel: kvm-clock: cpu 2, msr 6919a081, secondary cpu clock
May 17 00:38:58.857751 kernel: kvm-guest: setup async PF for cpu 2
May 17 00:38:58.857759 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 17 00:38:58.857768 kernel: #3
May 17 00:38:58.857782 kernel: kvm-clock: cpu 3, msr 6919a0c1, secondary cpu clock
May 17 00:38:58.857791 kernel: kvm-guest: setup async PF for cpu 3
May 17 00:38:58.857799 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 17 00:38:58.857807 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:38:58.857815 kernel: smpboot: Max logical packages: 1
May 17 00:38:58.857825 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:38:58.857833 kernel: devtmpfs: initialized
May 17 00:38:58.857842 kernel: x86/mm: Memory block size: 128MB
May 17 00:38:58.857850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:38:58.857859 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:38:58.857867 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:38:58.857875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:38:58.857884 kernel: audit: initializing netlink subsys (disabled)
May 17 00:38:58.857892 kernel: audit: type=2000 audit(1747442338.053:1): state=initialized audit_enabled=0 res=1
May 17 00:38:58.857902 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:38:58.857910 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:38:58.857919 kernel: cpuidle: using governor menu
May 17 00:38:58.857927 kernel: ACPI: bus type PCI registered
May 17 00:38:58.857935 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:38:58.857943 kernel: dca service started, version 1.12.1
May 17 00:38:58.857952 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:38:58.857961 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 17 00:38:58.857969 kernel: PCI: Using configuration type 1 for base access
May 17 00:38:58.857979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:38:58.857987 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:38:58.857996 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:38:58.858004 kernel: ACPI: Added _OSI(Module Device)
May 17 00:38:58.858012 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:38:58.858021 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:38:58.858029 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:38:58.858037 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:38:58.858045 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:38:58.858055 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:38:58.858064 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:38:58.858072 kernel: ACPI: Interpreter enabled
May 17 00:38:58.858080 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:38:58.858088 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:38:58.858097 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:38:58.858126 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:38:58.858134 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:38:58.858264 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:38:58.858350 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:38:58.858440 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:38:58.858453 kernel: PCI host bridge to bus 0000:00
May 17 00:38:58.858557 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:38:58.858639 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:38:58.858710 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:38:58.858794 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:38:58.858868 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:38:58.858941 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:38:58.859012 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:38:58.859119 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:38:58.859214 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:38:58.859297 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:38:58.859399 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:38:58.859489 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:38:58.859566 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:38:58.859660 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:38:58.859742 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 17 00:38:58.859838 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:38:58.859920 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:38:58.860010 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:38:58.860092 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:38:58.860187 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:38:58.860266 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:38:58.860356 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:38:58.860440 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 17 00:38:58.860523 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 17 00:38:58.860602 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 17 00:38:58.860680 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:38:58.860766 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:38:58.860859 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:38:58.860945 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:38:58.861024 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 17 00:38:58.861119 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 17 00:38:58.861207 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:38:58.861287 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:38:58.861298 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:38:58.861306 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:38:58.861315 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:38:58.861323 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:38:58.861332 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:38:58.861343 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:38:58.861351 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:38:58.861362 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:38:58.861371 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:38:58.861381 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:38:58.861389 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:38:58.861398 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:38:58.861406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:38:58.861415 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:38:58.861424 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:38:58.861432 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:38:58.861441 kernel: iommu: Default domain type: Translated
May 17 00:38:58.861449 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:38:58.861567 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:38:58.861651 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:38:58.863863 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:38:58.863881 kernel: vgaarb: loaded
May 17 00:38:58.863895 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:38:58.863920 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:38:58.863930 kernel: PTP clock support registered
May 17 00:38:58.863940 kernel: PCI: Using ACPI for IRQ routing
May 17 00:38:58.863950 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:38:58.863959 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:38:58.863968 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 17 00:38:58.863977 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:38:58.863994 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:38:58.864010 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:38:58.864018 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:38:58.864028 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:38:58.864036 kernel: pnp: PnP ACPI init
May 17 00:38:58.864206 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:38:58.864236 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:38:58.864245 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:38:58.864254 kernel: NET: Registered PF_INET protocol family
May 17 00:38:58.864266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:38:58.864287 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:38:58.864297 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:38:58.864305 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:38:58.864314 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 17 00:38:58.864323 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:38:58.864344 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:38:58.864352 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:38:58.864361 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:38:58.864372 kernel: NET: Registered PF_XDP protocol family
May 17 00:38:58.864506 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:38:58.864649 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:38:58.864772 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:38:58.869624 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:38:58.869696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:38:58.869756 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:38:58.869765 kernel: PCI: CLS 0 bytes, default 64
May 17 00:38:58.869785 kernel: Initialise system trusted keyrings
May 17 00:38:58.869793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:38:58.869801 kernel: Key type asymmetric registered
May 17 00:38:58.869808 kernel: Asymmetric key parser 'x509' registered
May 17 00:38:58.869815 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:38:58.869822 kernel: io scheduler mq-deadline registered
May 17 00:38:58.869830 kernel: io scheduler kyber registered
May 17 00:38:58.869839 kernel: io scheduler bfq registered
May 17 00:38:58.869849 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:38:58.869858 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:38:58.869870 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:38:58.869880 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:38:58.869889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:38:58.869898 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:38:58.869908 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:38:58.869917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:38:58.869926 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:38:58.870017 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:38:58.870033 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:38:58.870128 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:38:58.870208 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:38:58 UTC (1747442338)
May 17 00:38:58.870285 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:38:58.870296 kernel: NET: Registered PF_INET6 protocol family
May 17 00:38:58.870306 kernel: Segment Routing with IPv6
May 17 00:38:58.870314 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:38:58.870323 kernel: NET: Registered PF_PACKET protocol family
May 17 00:38:58.870333 kernel: Key type dns_resolver registered
May 17 00:38:58.870344 kernel: IPI shorthand broadcast: enabled
May 17 00:38:58.870352 kernel: sched_clock: Marking stable (421015579, 103351481)->(581915707, -57548647)
May 17 00:38:58.870360 kernel: registered taskstats version 1
May 17 00:38:58.870367 kernel: Loading compiled-in X.509 certificates
May 17 00:38:58.870374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:38:58.870381 kernel: Key type .fscrypt registered
May 17 00:38:58.870388 kernel: Key type fscrypt-provisioning registered
May 17 00:38:58.870395 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:38:58.870404 kernel: ima: Allocated hash algorithm: sha1
May 17 00:38:58.870411 kernel: ima: No architecture policies found
May 17 00:38:58.870418 kernel: clk: Disabling unused clocks
May 17 00:38:58.870425 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:38:58.870432 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:38:58.870439 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:38:58.870446 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:38:58.870453 kernel: Run /init as init process
May 17 00:38:58.870460 kernel: with arguments:
May 17 00:38:58.870469 kernel: /init
May 17 00:38:58.870476 kernel: with environment:
May 17 00:38:58.870483 kernel: HOME=/
May 17 00:38:58.870489 kernel: TERM=linux
May 17 00:38:58.870496 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:38:58.870506 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:38:58.870515 systemd[1]: Detected virtualization kvm.
May 17 00:38:58.870523 systemd[1]: Detected architecture x86-64.
May 17 00:38:58.870532 systemd[1]: Running in initrd.
May 17 00:38:58.870539 systemd[1]: No hostname configured, using default hostname.
May 17 00:38:58.870546 systemd[1]: Hostname set to .
May 17 00:38:58.870554 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:38:58.870561 systemd[1]: Queued start job for default target initrd.target.
May 17 00:38:58.870569 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:38:58.870576 systemd[1]: Reached target cryptsetup.target.
May 17 00:38:58.870583 systemd[1]: Reached target paths.target.
May 17 00:38:58.870592 systemd[1]: Reached target slices.target.
May 17 00:38:58.870609 systemd[1]: Reached target swap.target.
May 17 00:38:58.870621 systemd[1]: Reached target timers.target.
May 17 00:38:58.870631 systemd[1]: Listening on iscsid.socket.
May 17 00:38:58.870638 systemd[1]: Listening on iscsiuio.socket.
May 17 00:38:58.870646 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:38:58.870656 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:38:58.870663 systemd[1]: Listening on systemd-journald.socket.
May 17 00:38:58.870671 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:38:58.870679 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:38:58.870687 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:38:58.870694 systemd[1]: Reached target sockets.target.
May 17 00:38:58.870702 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:38:58.870709 systemd[1]: Finished network-cleanup.service.
May 17 00:38:58.870717 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:38:58.870726 systemd[1]: Starting systemd-journald.service...
May 17 00:38:58.870734 systemd[1]: Starting systemd-modules-load.service...
May 17 00:38:58.870741 systemd[1]: Starting systemd-resolved.service...
May 17 00:38:58.870749 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:38:58.870757 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:38:58.870764 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:38:58.870772 kernel: audit: type=1130 audit(1747442338.861:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.870790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:38:58.870803 systemd-journald[196]: Journal started
May 17 00:38:58.870843 systemd-journald[196]: Runtime Journal (/run/log/journal/64e8ce4e09e3449581fb7f18eb032d1c) is 6.0M, max 48.5M, 42.5M free.
May 17 00:38:58.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.861801 systemd-modules-load[197]: Inserted module 'overlay'
May 17 00:38:58.872852 systemd-resolved[198]: Positive Trust Anchors:
May 17 00:38:58.900369 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:38:58.900391 systemd[1]: Started systemd-journald.service.
May 17 00:38:58.872861 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:38:58.872887 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:38:58.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.875011 systemd-resolved[198]: Defaulting to hostname 'linux'.
May 17 00:38:58.904285 systemd[1]: Started systemd-resolved.service.
May 17 00:38:58.914631 kernel: audit: type=1130 audit(1747442338.902:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.914650 kernel: audit: type=1130 audit(1747442338.905:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.906222 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:38:58.923993 kernel: Bridge firewalling registered
May 17 00:38:58.924007 kernel: audit: type=1130 audit(1747442338.906:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.924021 kernel: audit: type=1130 audit(1747442338.922:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.906845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:38:58.922397 systemd-modules-load[197]: Inserted module 'br_netfilter'
May 17 00:38:58.922876 systemd[1]: Reached target nss-lookup.target.
May 17 00:38:58.929045 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:38:58.943960 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:38:58.950186 kernel: audit: type=1130 audit(1747442338.944:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.950202 kernel: SCSI subsystem initialized
May 17 00:38:58.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:38:58.945034 systemd[1]: Starting dracut-cmdline.service...
May 17 00:38:58.956124 dracut-cmdline[216]: dracut-dracut-053
May 17 00:38:58.958737 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:38:58.966817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:38:58.966867 kernel: device-mapper: uevent: version 1.0.3
May 17 00:38:58.966879 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:38:58.974322 systemd-modules-load[197]: Inserted module 'dm_multipath'
May 17 00:38:58.975184 systemd[1]: Finished systemd-modules-load.service.
May 17 00:38:58.980405 kernel: audit: type=1130 audit(1747442338.975:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:38:58.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.976718 systemd[1]: Starting systemd-sysctl.service... May 17 00:38:58.986098 systemd[1]: Finished systemd-sysctl.service. May 17 00:38:58.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.991126 kernel: audit: type=1130 audit(1747442338.987:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.029130 kernel: Loading iSCSI transport class v2.0-870. May 17 00:38:59.045140 kernel: iscsi: registered transport (tcp) May 17 00:38:59.065622 kernel: iscsi: registered transport (qla4xxx) May 17 00:38:59.065659 kernel: QLogic iSCSI HBA Driver May 17 00:38:59.092501 systemd[1]: Finished dracut-cmdline.service. May 17 00:38:59.097023 kernel: audit: type=1130 audit(1747442339.092:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.093846 systemd[1]: Starting dracut-pre-udev.service... 
May 17 00:38:59.182141 kernel: raid6: avx2x4 gen() 21451 MB/s May 17 00:38:59.199127 kernel: raid6: avx2x4 xor() 7059 MB/s May 17 00:38:59.216131 kernel: raid6: avx2x2 gen() 31208 MB/s May 17 00:38:59.233130 kernel: raid6: avx2x2 xor() 19215 MB/s May 17 00:38:59.250134 kernel: raid6: avx2x1 gen() 26526 MB/s May 17 00:38:59.267144 kernel: raid6: avx2x1 xor() 15344 MB/s May 17 00:38:59.292127 kernel: raid6: sse2x4 gen() 14509 MB/s May 17 00:38:59.309128 kernel: raid6: sse2x4 xor() 7350 MB/s May 17 00:38:59.326131 kernel: raid6: sse2x2 gen() 15814 MB/s May 17 00:38:59.343129 kernel: raid6: sse2x2 xor() 9508 MB/s May 17 00:38:59.360125 kernel: raid6: sse2x1 gen() 12219 MB/s May 17 00:38:59.377572 kernel: raid6: sse2x1 xor() 7597 MB/s May 17 00:38:59.377593 kernel: raid6: using algorithm avx2x2 gen() 31208 MB/s May 17 00:38:59.377602 kernel: raid6: .... xor() 19215 MB/s, rmw enabled May 17 00:38:59.378300 kernel: raid6: using avx2x2 recovery algorithm May 17 00:38:59.391136 kernel: xor: automatically using best checksumming function avx May 17 00:38:59.503154 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:38:59.513355 systemd[1]: Finished dracut-pre-udev.service. May 17 00:38:59.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.515000 audit: BPF prog-id=7 op=LOAD May 17 00:38:59.515000 audit: BPF prog-id=8 op=LOAD May 17 00:38:59.516499 systemd[1]: Starting systemd-udevd.service... May 17 00:38:59.528689 systemd-udevd[399]: Using default interface naming scheme 'v252'. May 17 00:38:59.532543 systemd[1]: Started systemd-udevd.service. May 17 00:38:59.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
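The raid6 lines above are the kernel benchmarking each available gen()/xor() implementation and then settling on the fastest generator ("using algorithm avx2x2 gen() 31208 MB/s"). A minimal sketch of that selection logic (not the kernel's actual code; the throughput figures are copied from the log):

```python
# Sketch: pick the raid6 algorithm the way the log shows the kernel does --
# highest gen() throughput wins; xor() speed is reported but not decisive.
gen_mb_s = {
    "avx2x4": 21451, "avx2x2": 31208, "avx2x1": 26526,
    "sse2x4": 14509, "sse2x2": 15814, "sse2x1": 12219,
}

def pick_raid6_algorithm(results):
    """Return (name, MB/s) of the fastest gen() implementation."""
    return max(results.items(), key=lambda kv: kv[1])

print(pick_raid6_algorithm(gen_mb_s))  # ('avx2x2', 31208), matching the log
```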
res=success' May 17 00:38:59.535674 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:38:59.544712 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation May 17 00:38:59.568326 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:38:59.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.570195 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:38:59.601010 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:38:59.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.633634 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:38:59.657812 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:38:59.657833 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:38:59.657845 kernel: GPT:9289727 != 19775487 May 17 00:38:59.657857 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:38:59.657869 kernel: GPT:9289727 != 19775487 May 17 00:38:59.657880 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:38:59.657891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.657907 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:38:59.657919 kernel: AES CTR mode by8 optimization enabled May 17 00:38:59.657930 kernel: libata version 3.00 loaded. 
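The GPT warnings above ("GPT:9289727 != 19775487") are benign here: the backup GPT header belongs in the disk's last LBA, and the mismatch simply means the virtual disk is larger than the image it was written from. The arithmetic, using the numbers from the log:

```python
# Sketch: where the backup GPT header should live vs. where it was found.
# All numbers come from the log lines above.
SECTOR_SIZE = 512
total_sectors = 19775488      # "[vda] 19775488 512-byte logical blocks"
alt_header_lba = 9289727      # "GPT:9289727 != 19775487"

# Per the GPT layout, the backup header sits in the last addressable LBA.
expected_alt_lba = total_sectors - 1
assert expected_alt_lba == 19775487

# The gap is the space added when the disk was grown past the image size;
# tools like sgdisk -e or parted can move the backup header to the end.
grown_by_sectors = expected_alt_lba - alt_header_lba
print(grown_by_sectors * SECTOR_SIZE // 2**20, "MiB")  # 5120 MiB
```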
May 17 00:38:59.670152 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:38:59.695387 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:38:59.695407 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:38:59.695512 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:38:59.695597 kernel: scsi host0: ahci May 17 00:38:59.695708 kernel: scsi host1: ahci May 17 00:38:59.695822 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) May 17 00:38:59.695833 kernel: scsi host2: ahci May 17 00:38:59.695929 kernel: scsi host3: ahci May 17 00:38:59.696021 kernel: scsi host4: ahci May 17 00:38:59.696136 kernel: scsi host5: ahci May 17 00:38:59.696232 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 17 00:38:59.696244 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 17 00:38:59.696255 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 17 00:38:59.696265 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 17 00:38:59.696275 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 17 00:38:59.696286 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 17 00:38:59.689414 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:38:59.737038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:38:59.744391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:38:59.749329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:38:59.760388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:38:59.762256 systemd[1]: Starting disk-uuid.service... May 17 00:38:59.772193 disk-uuid[538]: Primary Header is updated. May 17 00:38:59.772193 disk-uuid[538]: Secondary Entries is updated. 
May 17 00:38:59.772193 disk-uuid[538]: Secondary Header is updated. May 17 00:38:59.775923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.779131 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.783145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:39:00.006123 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:39:00.006185 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:39:00.006195 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:39:00.006203 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:39:00.006212 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:39:00.014142 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:39:00.015375 kernel: ata3.00: applying bridge limits May 17 00:39:00.016129 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:39:00.017129 kernel: ata3.00: configured for UDMA/100 May 17 00:39:00.019133 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:39:00.046122 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:39:00.062675 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:39:00.062690 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:39:00.784845 disk-uuid[539]: The operation has completed successfully. May 17 00:39:00.786120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:39:00.805000 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:39:00.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.805097 systemd[1]: Finished disk-uuid.service. 
May 17 00:39:00.812068 systemd[1]: Starting verity-setup.service... May 17 00:39:00.824157 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:39:00.842624 systemd[1]: Found device dev-mapper-usr.device. May 17 00:39:00.844891 systemd[1]: Mounting sysusr-usr.mount... May 17 00:39:00.846895 systemd[1]: Finished verity-setup.service. May 17 00:39:00.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.907757 systemd[1]: Mounted sysusr-usr.mount. May 17 00:39:00.909230 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:39:00.908504 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:39:00.909207 systemd[1]: Starting ignition-setup.service... May 17 00:39:00.911292 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:39:00.918633 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:39:00.918664 kernel: BTRFS info (device vda6): using free space tree May 17 00:39:00.918674 kernel: BTRFS info (device vda6): has skinny extents May 17 00:39:00.926447 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:39:00.943370 systemd[1]: Finished ignition-setup.service. May 17 00:39:00.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.945321 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:39:00.981955 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:39:00.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
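verity-setup.service above brings up the dm-verity device backing /usr ("verity: sha256 using implementation \"sha256-ni\""). Conceptually, dm-verity hashes every data block and compares it against a precomputed hash tree, failing the read rather than returning tampered data. A toy illustration of that per-block check (not the kernel implementation, which verifies against a Merkle tree rooted in the `verity.usrhash` kernel argument):

```python
# Toy sketch of the dm-verity per-block idea: a block whose digest does
# not match the expected value is rejected instead of being returned.
import hashlib

BLOCK_SIZE = 4096  # dm-verity's typical data block size

def verify_block(block: bytes, expected_digest: bytes) -> bytes:
    if hashlib.sha256(block).digest() != expected_digest:
        raise IOError("verity: data block corrupted")
    return block

block = b"\x00" * BLOCK_SIZE
good = hashlib.sha256(block).digest()
assert verify_block(block, good) == block  # intact block passes
```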
terminal=? res=success' May 17 00:39:00.983000 audit: BPF prog-id=9 op=LOAD May 17 00:39:00.984596 systemd[1]: Starting systemd-networkd.service... May 17 00:39:00.993148 ignition[651]: Ignition 2.14.0 May 17 00:39:00.993162 ignition[651]: Stage: fetch-offline May 17 00:39:00.993221 ignition[651]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:00.993231 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:00.993413 ignition[651]: parsed url from cmdline: "" May 17 00:39:00.993417 ignition[651]: no config URL provided May 17 00:39:00.993423 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:39:00.993432 ignition[651]: no config at "/usr/lib/ignition/user.ign" May 17 00:39:00.993453 ignition[651]: op(1): [started] loading QEMU firmware config module May 17 00:39:00.993462 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:39:00.997333 ignition[651]: op(1): [finished] loading QEMU firmware config module May 17 00:39:00.997354 ignition[651]: QEMU firmware config was not found. Ignoring... May 17 00:39:01.006026 systemd-networkd[718]: lo: Link UP May 17 00:39:01.006034 systemd-networkd[718]: lo: Gained carrier May 17 00:39:01.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.006419 systemd-networkd[718]: Enumeration completed May 17 00:39:01.006630 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:39:01.006820 systemd[1]: Started systemd-networkd.service. May 17 00:39:01.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
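The fetch-offline messages above trace Ignition's config search order on QEMU: a URL from the kernel cmdline, then the baked-in /usr/lib/ignition/user.ign, then the QEMU firmware config blob; on this boot all three come up empty ("QEMU firmware config was not found. Ignoring..."). A hypothetical helper sketching that fallback chain (not Ignition's real code):

```python
# Sketch of the lookup order the log shows: first present source wins,
# None means no config was provided anywhere (as on this boot).
def find_ignition_config(cmdline_url, user_ign, fw_cfg):
    for source, config in (("cmdline", cmdline_url),
                           ("user.ign", user_ign),
                           ("fw_cfg", fw_cfg)):
        if config is not None:
            return source, config
    return None

assert find_ignition_config(None, None, None) is None  # matches this boot
```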
res=success' May 17 00:39:01.008263 systemd-networkd[718]: eth0: Link UP May 17 00:39:01.008266 systemd-networkd[718]: eth0: Gained carrier May 17 00:39:01.008554 systemd[1]: Reached target network.target. May 17 00:39:01.020086 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:39:01.020086 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:39:01.020086 iscsid[725]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:39:01.020086 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:39:01.020086 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:39:01.020086 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:39:01.020086 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:39:01.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.011379 systemd[1]: Starting iscsiuio.service... May 17 00:39:01.015052 systemd[1]: Started iscsiuio.service. May 17 00:39:01.016921 systemd[1]: Starting iscsid.service... May 17 00:39:01.020587 systemd[1]: Started iscsid.service. May 17 00:39:01.023893 systemd[1]: Starting dracut-initqueue.service...
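The iscsid warnings above are harmless when no iSCSI targets are in use, but can be silenced by creating the file the daemon asks for. A minimal /etc/iscsi/initiatorname.iscsi, following the IQN format iscsid describes (the IQN below is a hypothetical example, not a value from this system):

```
# /etc/iscsi/initiatorname.iscsi -- example only; use a name in the
# iqn.yyyy-mm.<reversed domain name>[:identifier] format you control.
InitiatorName=iqn.2025-05.org.example:node1
```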
May 17 00:39:01.036914 systemd[1]: Finished dracut-initqueue.service. May 17 00:39:01.040946 systemd[1]: Reached target remote-fs-pre.target. May 17 00:39:01.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.043260 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:39:01.045866 systemd[1]: Reached target remote-fs.target. May 17 00:39:01.049646 systemd[1]: Starting dracut-pre-mount.service... May 17 00:39:01.058431 systemd[1]: Finished dracut-pre-mount.service. May 17 00:39:01.094381 ignition[651]: parsing config with SHA512: 03001cdd9710ebc1db380f72327497abac09f7bb5b8a31f44dd634d7446bd210abf59ac2915a9e4acec6576c1eb3491f87056abe48a8007c88bc866e71092775 May 17 00:39:01.102226 unknown[651]: fetched base config from "system" May 17 00:39:01.102238 unknown[651]: fetched user config from "qemu" May 17 00:39:01.102748 ignition[651]: fetch-offline: fetch-offline passed May 17 00:39:01.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.103770 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:39:01.102796 ignition[651]: Ignition finished successfully May 17 00:39:01.105256 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:39:01.105601 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:39:01.106317 systemd[1]: Starting ignition-kargs.service... 
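The "parsing config with SHA512: 03001c..." line above is Ignition logging the digest of the config it ended up with (here the base system config plus the "qemu" user config), which lets you confirm after the fact exactly which config a node booted. A small sketch of reproducing such a digest (the config bytes below are hypothetical; a real check must hash the exact bytes Ignition saw):

```python
# Sketch: compute the SHA512 hex digest of a config blob, as Ignition
# reports in its "parsing config with SHA512: ..." log line.
import hashlib

def config_digest(config_bytes: bytes) -> str:
    return hashlib.sha512(config_bytes).hexdigest()

# Hypothetical config payload; compare the result against the logged digest.
print(config_digest(b'{"ignition": {"version": "3.0.0"}}'))
```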
May 17 00:39:01.115205 ignition[739]: Ignition 2.14.0 May 17 00:39:01.115216 ignition[739]: Stage: kargs May 17 00:39:01.115303 ignition[739]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.115313 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.117693 systemd[1]: Finished ignition-kargs.service. May 17 00:39:01.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.116315 ignition[739]: kargs: kargs passed May 17 00:39:01.120178 systemd[1]: Starting ignition-disks.service... May 17 00:39:01.116350 ignition[739]: Ignition finished successfully May 17 00:39:01.126367 ignition[745]: Ignition 2.14.0 May 17 00:39:01.126378 ignition[745]: Stage: disks May 17 00:39:01.126459 ignition[745]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.127904 systemd[1]: Finished ignition-disks.service. May 17 00:39:01.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.126467 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.129606 systemd[1]: Reached target initrd-root-device.target. May 17 00:39:01.127321 ignition[745]: disks: disks passed May 17 00:39:01.131256 systemd[1]: Reached target local-fs-pre.target. May 17 00:39:01.127354 ignition[745]: Ignition finished successfully May 17 00:39:01.131659 systemd[1]: Reached target local-fs.target. May 17 00:39:01.131837 systemd[1]: Reached target sysinit.target. May 17 00:39:01.132000 systemd[1]: Reached target basic.target. May 17 00:39:01.132979 systemd[1]: Starting systemd-fsck-root.service... 
May 17 00:39:01.145803 systemd-fsck[753]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 17 00:39:01.153955 systemd[1]: Finished systemd-fsck-root.service. May 17 00:39:01.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.156413 systemd[1]: Mounting sysroot.mount... May 17 00:39:01.164058 systemd[1]: Mounted sysroot.mount. May 17 00:39:01.165558 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:39:01.164620 systemd[1]: Reached target initrd-root-fs.target. May 17 00:39:01.167395 systemd[1]: Mounting sysroot-usr.mount... May 17 00:39:01.168209 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 17 00:39:01.168239 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:39:01.168257 systemd[1]: Reached target ignition-diskful.target. May 17 00:39:01.170055 systemd[1]: Mounted sysroot-usr.mount. May 17 00:39:01.172252 systemd[1]: Starting initrd-setup-root.service... May 17 00:39:01.178160 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:39:01.182567 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory May 17 00:39:01.186446 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:39:01.190227 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:39:01.214557 systemd[1]: Finished initrd-setup-root.service. May 17 00:39:01.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:01.216282 systemd[1]: Starting ignition-mount.service... May 17 00:39:01.217296 systemd[1]: Starting sysroot-boot.service... May 17 00:39:01.224519 bash[805]: umount: /sysroot/usr/share/oem: not mounted. May 17 00:39:01.232861 systemd[1]: Finished sysroot-boot.service. May 17 00:39:01.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.234554 ignition[806]: INFO : Ignition 2.14.0 May 17 00:39:01.234554 ignition[806]: INFO : Stage: mount May 17 00:39:01.234554 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.234554 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.234554 ignition[806]: INFO : mount: mount passed May 17 00:39:01.234554 ignition[806]: INFO : Ignition finished successfully May 17 00:39:01.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.237249 systemd[1]: Finished ignition-mount.service. May 17 00:39:01.855561 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:39:01.861124 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) May 17 00:39:01.863493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:39:01.863514 kernel: BTRFS info (device vda6): using free space tree May 17 00:39:01.863523 kernel: BTRFS info (device vda6): has skinny extents May 17 00:39:01.867463 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:39:01.869865 systemd[1]: Starting ignition-files.service... 
May 17 00:39:01.882887 ignition[834]: INFO : Ignition 2.14.0 May 17 00:39:01.882887 ignition[834]: INFO : Stage: files May 17 00:39:01.884681 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.884681 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.884681 ignition[834]: DEBUG : files: compiled without relabeling support, skipping May 17 00:39:01.888302 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:39:01.888302 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:39:01.888302 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:39:01.888302 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:39:01.888302 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:39:01.888302 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:39:01.888302 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:39:01.888302 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:39:01.888302 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:39:01.887575 unknown[834]: wrote ssh authorized keys file for user: core May 17 00:39:01.935086 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:39:02.168456 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 
May 17 00:39:02.168456 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:02.172499 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:39:02.187240 systemd-networkd[718]: eth0: Gained IPv6LL
May 17 00:39:02.859405 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:39:03.201376 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:03.201376 ignition[834]: INFO : files: op(c): [started] processing unit "containerd.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:39:03.205779 ignition[834]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:39:03.242476 ignition[834]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:39:03.244258 ignition[834]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:39:03.244258 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:39:03.247713 ignition[834]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:39:03.247713 ignition[834]: INFO : files: files passed
May 17 00:39:03.247713 ignition[834]: INFO : Ignition finished successfully
May 17 00:39:03.252364 systemd[1]: Finished ignition-files.service.
May 17 00:39:03.257585 kernel: kauditd_printk_skb: 24 callbacks suppressed
May 17 00:39:03.257605 kernel: audit: type=1130 audit(1747442343.252:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.253687 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:39:03.257765 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:39:03.262770 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 17 00:39:03.270764 kernel: audit: type=1130 audit(1747442343.263:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.270790 kernel: audit: type=1131 audit(1747442343.263:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.258355 systemd[1]: Starting ignition-quench.service...
May 17 00:39:03.260888 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:39:03.260968 systemd[1]: Finished ignition-quench.service.
May 17 00:39:03.274740 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:39:03.276572 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:39:03.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.278699 systemd[1]: Reached target ignition-complete.target.
May 17 00:39:03.283337 kernel: audit: type=1130 audit(1747442343.278:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.283434 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:39:03.294153 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:39:03.294241 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:39:03.302370 kernel: audit: type=1130 audit(1747442343.296:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.302387 kernel: audit: type=1131 audit(1747442343.296:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.296239 systemd[1]: Reached target initrd-fs.target.
May 17 00:39:03.302929 systemd[1]: Reached target initrd.target.
May 17 00:39:03.304415 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:39:03.305036 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:39:03.313798 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:39:03.318394 kernel: audit: type=1130 audit(1747442343.313:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.318442 systemd[1]: Starting initrd-cleanup.service...
May 17 00:39:03.328834 systemd[1]: Stopped target nss-lookup.target.
May 17 00:39:03.329356 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:39:03.331236 systemd[1]: Stopped target timers.target.
May 17 00:39:03.332712 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:39:03.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.332815 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:39:03.339313 kernel: audit: type=1131 audit(1747442343.334:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.334206 systemd[1]: Stopped target initrd.target.
May 17 00:39:03.337851 systemd[1]: Stopped target basic.target.
May 17 00:39:03.339808 systemd[1]: Stopped target ignition-complete.target.
May 17 00:39:03.341573 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:39:03.343078 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:39:03.343475 systemd[1]: Stopped target remote-fs.target.
May 17 00:39:03.345869 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:39:03.347415 systemd[1]: Stopped target sysinit.target.
May 17 00:39:03.349532 systemd[1]: Stopped target local-fs.target.
May 17 00:39:03.351062 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:39:03.358879 kernel: audit: type=1131 audit(1747442343.354:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.351390 systemd[1]: Stopped target swap.target.
May 17 00:39:03.353149 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:39:03.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.353236 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:39:03.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.354691 systemd[1]: Stopped target cryptsetup.target.
May 17 00:39:03.368286 kernel: audit: type=1131 audit(1747442343.360:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.359164 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:39:03.359243 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:39:03.360903 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:39:03.360981 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:39:03.364652 systemd[1]: Stopped target paths.target.
May 17 00:39:03.367508 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:39:03.369337 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:39:03.375221 systemd[1]: Stopped target slices.target.
May 17 00:39:03.376911 systemd[1]: Stopped target sockets.target.
May 17 00:39:03.378664 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:39:03.379637 systemd[1]: Closed iscsid.socket.
May 17 00:39:03.381284 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:39:03.382252 systemd[1]: Closed iscsiuio.socket.
May 17 00:39:03.383835 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:39:03.385198 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:39:03.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.387418 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:39:03.388528 systemd[1]: Stopped ignition-files.service.
May 17 00:39:03.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.391142 systemd[1]: Stopping ignition-mount.service...
May 17 00:39:03.393414 systemd[1]: Stopping sysroot-boot.service...
May 17 00:39:03.395013 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:39:03.396268 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:39:03.398090 ignition[875]: INFO : Ignition 2.14.0
May 17 00:39:03.398090 ignition[875]: INFO : Stage: umount
May 17 00:39:03.398090 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:39:03.398090 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:39:03.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.398273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:39:03.404578 ignition[875]: INFO : umount: umount passed
May 17 00:39:03.404578 ignition[875]: INFO : Ignition finished successfully
May 17 00:39:03.401145 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:39:03.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.409510 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:39:03.410959 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:39:03.411998 systemd[1]: Stopped ignition-mount.service.
May 17 00:39:03.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.414087 systemd[1]: Stopped target network.target.
May 17 00:39:03.415774 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:39:03.415822 systemd[1]: Stopped ignition-disks.service.
May 17 00:39:03.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.418553 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:39:03.418595 systemd[1]: Stopped ignition-kargs.service.
May 17 00:39:03.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.421291 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:39:03.421335 systemd[1]: Stopped ignition-setup.service.
May 17 00:39:03.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.424087 systemd[1]: Stopping systemd-networkd.service...
May 17 00:39:03.425932 systemd[1]: Stopping systemd-resolved.service...
May 17 00:39:03.427804 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:39:03.428842 systemd[1]: Finished initrd-cleanup.service.
May 17 00:39:03.429149 systemd-networkd[718]: eth0: DHCPv6 lease lost
May 17 00:39:03.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.433664 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:39:03.434886 systemd[1]: Stopped systemd-networkd.service.
May 17 00:39:03.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.437791 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:39:03.438868 systemd[1]: Stopped systemd-resolved.service.
May 17 00:39:03.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.441000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:39:03.441222 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:39:03.441259 systemd[1]: Closed systemd-networkd.socket.
May 17 00:39:03.442000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:39:03.444969 systemd[1]: Stopping network-cleanup.service...
May 17 00:39:03.446658 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:39:03.446708 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:39:03.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.449836 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:39:03.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.449875 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:39:03.452677 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:39:03.452722 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:39:03.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.455457 systemd[1]: Stopping systemd-udevd.service...
May 17 00:39:03.458251 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:39:03.461635 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:39:03.462725 systemd[1]: Stopped systemd-udevd.service.
May 17 00:39:03.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.464721 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:39:03.465694 systemd[1]: Stopped network-cleanup.service.
May 17 00:39:03.467382 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:39:03.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.467414 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:39:03.470241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:39:03.470268 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:39:03.472844 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:39:03.473763 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:39:03.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.475298 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:39:03.475327 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:39:03.477903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:39:03.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.477935 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:39:03.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.481013 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:39:03.483097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:39:03.484351 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:39:03.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.486595 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:39:03.487560 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:39:03.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.489148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:39:03.489188 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:39:03.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.492612 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:39:03.494396 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:39:03.495480 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:39:03.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.504780 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:39:03.505773 systemd[1]: Stopped sysroot-boot.service.
May 17 00:39:03.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.507329 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:39:03.509148 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:39:03.510157 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:39:03.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.512380 systemd[1]: Starting initrd-switch-root.service...
May 17 00:39:03.518201 systemd[1]: Switching root.
May 17 00:39:03.520000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:39:03.520000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:39:03.520000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:39:03.521000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:39:03.521000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:39:03.537431 iscsid[725]: iscsid shutting down.
May 17 00:39:03.538341 systemd-journald[196]: Received SIGTERM from PID 1 (n/a).
May 17 00:39:03.538406 systemd-journald[196]: Journal stopped
May 17 00:39:06.482294 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:39:06.482350 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:39:06.482363 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:39:06.482373 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:39:06.482385 kernel: SELinux: policy capability open_perms=1
May 17 00:39:06.482394 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:39:06.482406 kernel: SELinux: policy capability always_check_network=0
May 17 00:39:06.482416 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:39:06.482430 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:39:06.482439 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:39:06.482449 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:39:06.482461 systemd[1]: Successfully loaded SELinux policy in 42.235ms.
May 17 00:39:06.482476 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.720ms.
May 17 00:39:06.482489 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:39:06.482503 systemd[1]: Detected virtualization kvm.
May 17 00:39:06.482515 systemd[1]: Detected architecture x86-64.
May 17 00:39:06.482526 systemd[1]: Detected first boot.
May 17 00:39:06.482536 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:39:06.482547 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:39:06.482569 systemd[1]: Populated /etc with preset unit settings.
May 17 00:39:06.482583 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:39:06.482594 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:39:06.482605 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:39:06.482621 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:39:06.482641 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 17 00:39:06.482658 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:39:06.482670 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:39:06.482684 systemd[1]: Created slice system-getty.slice.
May 17 00:39:06.482697 systemd[1]: Created slice system-modprobe.slice.
May 17 00:39:06.482710 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:39:06.482722 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:39:06.482734 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:39:06.482747 systemd[1]: Created slice user.slice.
May 17 00:39:06.482760 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:39:06.482779 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:39:06.482792 systemd[1]: Set up automount boot.automount.
May 17 00:39:06.482807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:39:06.482820 systemd[1]: Reached target integritysetup.target.
May 17 00:39:06.482833 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:39:06.482844 systemd[1]: Reached target remote-fs.target.
May 17 00:39:06.482854 systemd[1]: Reached target slices.target.
May 17 00:39:06.482864 systemd[1]: Reached target swap.target.
May 17 00:39:06.482873 systemd[1]: Reached target torcx.target.
May 17 00:39:06.482883 systemd[1]: Reached target veritysetup.target.
May 17 00:39:06.482895 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:39:06.482906 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:39:06.482916 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:39:06.482926 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:39:06.482936 systemd[1]: Listening on systemd-journald.socket.
May 17 00:39:06.482946 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:39:06.482956 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:39:06.482966 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:39:06.482976 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:39:06.482986 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:39:06.482998 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:39:06.483021 systemd[1]: Mounting media.mount...
May 17 00:39:06.483037 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:06.483047 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:39:06.483056 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:39:06.483066 systemd[1]: Mounting tmp.mount...
May 17 00:39:06.483076 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:39:06.483086 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:06.483096 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:39:06.483135 systemd[1]: Starting modprobe@configfs.service...
May 17 00:39:06.483146 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:06.483157 systemd[1]: Starting modprobe@drm.service...
May 17 00:39:06.483167 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:06.483177 systemd[1]: Starting modprobe@fuse.service...
May 17 00:39:06.483187 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:06.483197 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:39:06.483208 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:39:06.483217 kernel: loop: module loaded
May 17 00:39:06.483229 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:39:06.483240 systemd[1]: Starting systemd-journald.service...
May 17 00:39:06.483250 kernel: fuse: init (API version 7.34)
May 17 00:39:06.483260 systemd[1]: Starting systemd-modules-load.service...
May 17 00:39:06.483272 systemd[1]: Starting systemd-network-generator.service...
May 17 00:39:06.483282 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:39:06.483292 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:39:06.483303 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:06.483315 systemd-journald[1019]: Journal started
May 17 00:39:06.483356 systemd-journald[1019]: Runtime Journal (/run/log/journal/64e8ce4e09e3449581fb7f18eb032d1c) is 6.0M, max 48.5M, 42.5M free.
May 17 00:39:06.397000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:39:06.397000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 17 00:39:06.479000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:39:06.479000 audit[1019]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffed82852a0 a2=4000 a3=7ffed828533c items=0 ppid=1 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:06.479000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:39:06.488383 systemd[1]: Started systemd-journald.service.
May 17 00:39:06.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.489600 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:39:06.490575 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:39:06.491458 systemd[1]: Mounted media.mount.
May 17 00:39:06.492354 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:39:06.493332 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:39:06.494339 systemd[1]: Mounted tmp.mount.
May 17 00:39:06.495599 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:39:06.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.496824 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:39:06.497058 systemd[1]: Finished modprobe@configfs.service.
May 17 00:39:06.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.498280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:06.498486 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:06.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.499648 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:39:06.499874 systemd[1]: Finished modprobe@drm.service.
May 17 00:39:06.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.501021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:06.501429 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:06.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.502837 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:39:06.503072 systemd[1]: Finished modprobe@fuse.service.
May 17 00:39:06.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.504605 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:06.504931 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:06.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.506538 systemd[1]: Finished systemd-modules-load.service.
May 17 00:39:06.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.508000 systemd[1]: Finished systemd-network-generator.service.
May 17 00:39:06.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.509428 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:39:06.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.510792 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:39:06.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.512172 systemd[1]: Reached target network-pre.target.
May 17 00:39:06.514576 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:39:06.516602 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:39:06.517524 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:39:06.519314 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:39:06.521338 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:39:06.522334 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:06.523653 systemd[1]: Starting systemd-random-seed.service...
May 17 00:39:06.524668 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:06.525898 systemd[1]: Starting systemd-sysctl.service...
May 17 00:39:06.528231 systemd-journald[1019]: Time spent on flushing to /var/log/journal/64e8ce4e09e3449581fb7f18eb032d1c is 13.350ms for 1035 entries.
May 17 00:39:06.528231 systemd-journald[1019]: System Journal (/var/log/journal/64e8ce4e09e3449581fb7f18eb032d1c) is 8.0M, max 195.6M, 187.6M free.
May 17 00:39:06.891510 systemd-journald[1019]: Received client request to flush runtime journal.
May 17 00:39:06.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.528103 systemd[1]: Starting systemd-sysusers.service...
May 17 00:39:06.537962 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:39:06.540193 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:39:06.892274 udevadm[1061]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:39:06.548296 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:39:06.550401 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:39:06.573413 systemd[1]: Finished systemd-sysctl.service.
May 17 00:39:06.578299 systemd[1]: Finished systemd-sysusers.service.
May 17 00:39:06.580923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:39:06.626227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:39:06.701781 systemd[1]: Finished systemd-random-seed.service.
May 17 00:39:06.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.703007 systemd[1]: Reached target first-boot-complete.target.
May 17 00:39:06.892775 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:39:07.085217 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:39:07.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:07.087487 systemd[1]: Starting systemd-udevd.service...
May 17 00:39:07.102616 systemd-udevd[1072]: Using default interface naming scheme 'v252'.
May 17 00:39:07.114679 systemd[1]: Started systemd-udevd.service.
May 17 00:39:07.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:07.117584 systemd[1]: Starting systemd-networkd.service...
May 17 00:39:07.122251 systemd[1]: Starting systemd-userdbd.service...
May 17 00:39:07.165486 systemd[1]: Started systemd-userdbd.service.
May 17 00:39:07.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:07.179911 systemd[1]: Found device dev-ttyS0.device.
May 17 00:39:07.194135 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:39:07.194281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:39:07.198133 kernel: ACPI: button: Power Button [PWRF]
May 17 00:39:07.211707 systemd-networkd[1079]: lo: Link UP
May 17 00:39:07.211990 systemd-networkd[1079]: lo: Gained carrier
May 17 00:39:07.212455 systemd-networkd[1079]: Enumeration completed
May 17 00:39:07.212630 systemd[1]: Started systemd-networkd.service.
May 17 00:39:07.213302 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:39:07.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:39:07.214638 systemd-networkd[1079]: eth0: Link UP May 17 00:39:07.214710 systemd-networkd[1079]: eth0: Gained carrier May 17 00:39:07.218000 audit[1083]: AVC avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:39:07.218000 audit[1083]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561ef782b7f0 a1=338ac a2=7fb0430ffbc5 a3=5 items=110 ppid=1072 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:07.218000 audit: CWD cwd="/" May 17 00:39:07.218000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=1 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=2 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=3 name=(null) inode=15550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=4 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=5 name=(null) inode=15551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=6 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=7 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=8 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=9 name=(null) inode=15553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=10 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=11 name=(null) inode=15554 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=12 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=13 name=(null) inode=15555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=14 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=15 name=(null) inode=15556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=16 name=(null) inode=15552 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=17 name=(null) inode=15557 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=18 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=19 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=20 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=21 name=(null) inode=15559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=22 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=23 name=(null) inode=15560 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=24 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=25 name=(null) inode=15561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=26 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=27 name=(null) inode=15562 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=28 name=(null) inode=15558 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=29 name=(null) inode=15563 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=30 name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=31 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=32 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:39:07.218000 audit: PATH item=33 name=(null) inode=15565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=34 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=35 name=(null) inode=15566 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=36 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=37 name=(null) inode=15567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=38 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=39 name=(null) inode=15568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=40 name=(null) inode=15564 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=41 name=(null) inode=15569 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=42 
name=(null) inode=15549 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=43 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=44 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=45 name=(null) inode=15571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=46 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=47 name=(null) inode=15572 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=48 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=49 name=(null) inode=15573 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=50 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=51 name=(null) inode=15574 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=52 name=(null) inode=15570 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=53 name=(null) inode=15575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=55 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=56 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=57 name=(null) inode=15577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=58 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=59 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=60 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=61 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=62 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=63 name=(null) inode=15580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=64 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=65 name=(null) inode=15581 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=66 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=67 name=(null) inode=15582 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=68 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=69 name=(null) inode=15583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=70 name=(null) inode=15579 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=71 name=(null) inode=15584 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=72 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=73 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=74 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=75 name=(null) inode=15586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=76 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=77 name=(null) inode=15587 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=78 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=79 name=(null) inode=15588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=80 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=81 name=(null) inode=15589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=82 name=(null) inode=15585 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=83 name=(null) inode=15590 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=84 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=85 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=86 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=87 name=(null) inode=15592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:39:07.218000 audit: PATH item=88 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=89 name=(null) inode=15593 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=90 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=91 name=(null) inode=15594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=92 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=93 name=(null) inode=15595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=94 name=(null) inode=15591 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=95 name=(null) inode=15596 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=96 name=(null) inode=15576 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=97 
name=(null) inode=15597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=98 name=(null) inode=15597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=99 name=(null) inode=15598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=100 name=(null) inode=15597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=101 name=(null) inode=15599 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=102 name=(null) inode=15597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=103 name=(null) inode=15600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=104 name=(null) inode=15597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=105 name=(null) inode=15601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=106 name=(null) inode=15597 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=107 name=(null) inode=15602 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PATH item=109 name=(null) inode=15603 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.218000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:39:07.232493 systemd-networkd[1079]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:39:07.249134 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:39:07.255895 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:39:07.255923 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:39:07.256062 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:39:07.260129 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:39:07.295138 kernel: kvm: Nested Virtualization enabled May 17 00:39:07.295324 kernel: SVM: kvm: Nested Paging enabled May 17 00:39:07.295373 kernel: SVM: Virtual VMLOAD VMSAVE supported May 17 00:39:07.295420 kernel: SVM: Virtual GIF supported May 17 00:39:07.317127 kernel: EDAC MC: Ver: 3.0.0 May 17 00:39:07.342449 systemd[1]: Finished systemd-udev-settle.service. 
May 17 00:39:07.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.344673 systemd[1]: Starting lvm2-activation-early.service... May 17 00:39:07.352331 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:39:07.373798 systemd[1]: Finished lvm2-activation-early.service. May 17 00:39:07.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.375036 systemd[1]: Reached target cryptsetup.target. May 17 00:39:07.377255 systemd[1]: Starting lvm2-activation.service... May 17 00:39:07.380631 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:39:07.406838 systemd[1]: Finished lvm2-activation.service. May 17 00:39:07.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.407823 systemd[1]: Reached target local-fs-pre.target. May 17 00:39:07.408748 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:39:07.408774 systemd[1]: Reached target local-fs.target. May 17 00:39:07.409666 systemd[1]: Reached target machines.target. May 17 00:39:07.411468 systemd[1]: Starting ldconfig.service... May 17 00:39:07.412621 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:39:07.412663 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:07.413594 systemd[1]: Starting systemd-boot-update.service... May 17 00:39:07.415762 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:39:07.418114 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:39:07.420384 systemd[1]: Starting systemd-sysext.service... May 17 00:39:07.422050 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) May 17 00:39:07.422962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:39:07.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.427473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:39:07.434063 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:39:07.437950 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:39:07.438130 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:39:07.447131 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:39:07.471838 systemd-fsck[1122]: fsck.fat 4.2 (2021-01-31) May 17 00:39:07.471838 systemd-fsck[1122]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:39:07.473349 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:39:07.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.478686 systemd[1]: Mounting boot.mount... 
May 17 00:39:07.703618 systemd[1]: Mounted boot.mount. May 17 00:39:07.709228 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:39:07.719165 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:39:07.719830 systemd[1]: Finished systemd-boot-update.service. May 17 00:39:07.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.721396 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:39:07.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.731123 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:39:07.735563 (sd-sysext)[1134]: Using extensions 'kubernetes'. May 17 00:39:07.735862 (sd-sysext)[1134]: Merged extensions into '/usr'. May 17 00:39:07.752755 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:07.754254 systemd[1]: Mounting usr-share-oem.mount... May 17 00:39:07.755402 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:07.756429 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:07.758355 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:07.759131 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:39:07.760425 systemd[1]: Starting modprobe@loop.service... May 17 00:39:07.761525 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:39:07.761669 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:07.761813 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:07.765340 systemd[1]: Mounted usr-share-oem.mount. May 17 00:39:07.767465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:07.767599 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:07.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.768977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:07.769138 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:07.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.770499 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:07.770635 systemd[1]: Finished modprobe@loop.service. 
May 17 00:39:07.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.771892 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:07.771974 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:07.772858 systemd[1]: Finished systemd-sysext.service. May 17 00:39:07.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.774904 systemd[1]: Starting ensure-sysext.service... May 17 00:39:07.776593 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:39:07.780820 systemd[1]: Finished ldconfig.service. May 17 00:39:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.783880 systemd[1]: Reloading. May 17 00:39:07.787694 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:39:07.788710 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:39:07.790936 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:39:07.828609 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-05-17T00:39:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:07.828641 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-05-17T00:39:07Z" level=info msg="torcx already run" May 17 00:39:07.905559 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:07.905580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:07.925034 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:39:07.974642 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:39:07.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.978663 systemd[1]: Starting audit-rules.service... May 17 00:39:07.980629 systemd[1]: Starting clean-ca-certificates.service... May 17 00:39:07.982629 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:39:08.010049 systemd[1]: Starting systemd-resolved.service... May 17 00:39:08.012377 systemd[1]: Starting systemd-timesyncd.service... May 17 00:39:08.014693 systemd[1]: Starting systemd-update-utmp.service... May 17 00:39:08.017444 systemd[1]: Finished clean-ca-certificates.service. 
May 17 00:39:08.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.021000 audit[1229]: SYSTEM_BOOT pid=1229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:39:08.022663 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:39:08.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.027544 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.027837 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.029652 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.031808 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.033716 systemd[1]: Starting modprobe@loop.service... May 17 00:39:08.034759 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:08.035053 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.036881 systemd[1]: Starting systemd-update-done.service... May 17 00:39:08.037960 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 17 00:39:08.038275 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.040074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.040384 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:08.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.041851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.041990 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.043490 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.043706 systemd[1]: Finished modprobe@loop.service. May 17 00:39:08.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:08.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.045069 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:08.045864 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:08.046572 systemd[1]: Finished systemd-update-utmp.service. May 17 00:39:08.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.048153 systemd[1]: Finished systemd-update-done.service. May 17 00:39:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.051843 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.052079 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.053341 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.055680 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.057425 systemd[1]: Starting modprobe@loop.service... May 17 00:39:08.058304 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:08.058419 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:39:08.058543 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:39:08.058621 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.059537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.059668 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:39:08.060985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.061179 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.062557 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.062720 systemd[1]: Finished modprobe@loop.service. May 17 00:39:08.064039 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:08.064143 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:08.067195 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.067435 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.068370 augenrules[1256]: No rules May 17 00:39:08.068570 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.068000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:39:08.068000 audit[1256]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef980e590 a2=420 a3=0 items=0 ppid=1217 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:08.068000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:39:08.070861 systemd[1]: Starting modprobe@drm.service... May 17 00:39:08.072669 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.074607 systemd[1]: Starting modprobe@loop.service... May 17 00:39:08.075892 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:39:08.076021 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.077453 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:39:08.078832 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:39:08.078948 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.085911 systemd[1]: Finished audit-rules.service. May 17 00:39:08.087827 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.088020 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:08.089490 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:39:08.089669 systemd[1]: Finished modprobe@drm.service. May 17 00:39:08.091232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.091430 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.093094 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.093354 systemd[1]: Finished modprobe@loop.service. May 17 00:39:08.094200 systemd-resolved[1221]: Positive Trust Anchors: May 17 00:39:08.094404 systemd-resolved[1221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:39:08.094498 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:39:08.095021 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:08.095135 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:08.097207 systemd[1]: Finished ensure-sysext.service. May 17 00:39:08.102017 systemd-resolved[1221]: Defaulting to hostname 'linux'. May 17 00:39:08.103364 systemd[1]: Started systemd-resolved.service. May 17 00:39:08.104424 systemd[1]: Started systemd-timesyncd.service. May 17 00:39:08.105415 systemd[1]: Reached target network.target. May 17 00:39:08.106303 systemd[1]: Reached target nss-lookup.target. May 17 00:39:08.107002 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:39:08.107053 systemd-timesyncd[1223]: Initial clock synchronization to Sat 2025-05-17 00:39:08.139726 UTC. May 17 00:39:08.107382 systemd[1]: Reached target sysinit.target. May 17 00:39:08.108317 systemd[1]: Started motdgen.path. May 17 00:39:08.109087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:39:08.110299 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:39:08.111247 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:39:08.111273 systemd[1]: Reached target paths.target. 
May 17 00:39:08.112077 systemd[1]: Reached target time-set.target. May 17 00:39:08.113060 systemd[1]: Started logrotate.timer. May 17 00:39:08.113950 systemd[1]: Started mdadm.timer. May 17 00:39:08.114688 systemd[1]: Reached target timers.target. May 17 00:39:08.115751 systemd[1]: Listening on dbus.socket. May 17 00:39:08.117717 systemd[1]: Starting docker.socket... May 17 00:39:08.119334 systemd[1]: Listening on sshd.socket. May 17 00:39:08.120264 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.120648 systemd[1]: Listening on docker.socket. May 17 00:39:08.121498 systemd[1]: Reached target sockets.target. May 17 00:39:08.122374 systemd[1]: Reached target basic.target. May 17 00:39:08.123346 systemd[1]: System is tainted: cgroupsv1 May 17 00:39:08.123386 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:39:08.123407 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:39:08.124393 systemd[1]: Starting containerd.service... May 17 00:39:08.126132 systemd[1]: Starting dbus.service... May 17 00:39:08.127665 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:39:08.129677 systemd[1]: Starting extend-filesystems.service... May 17 00:39:08.130565 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:39:08.131675 systemd[1]: Starting motdgen.service... May 17 00:39:08.133496 systemd[1]: Starting prepare-helm.service... May 17 00:39:08.134456 jq[1280]: false May 17 00:39:08.135476 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:39:08.137491 systemd[1]: Starting sshd-keygen.service... 
May 17 00:39:08.140204 systemd[1]: Starting systemd-logind.service... May 17 00:39:08.141453 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.141520 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:39:08.164622 dbus-daemon[1279]: [system] SELinux support is enabled May 17 00:39:08.142683 systemd[1]: Starting update-engine.service... May 17 00:39:08.144884 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:39:08.147906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:39:08.169847 tar[1301]: linux-amd64/helm May 17 00:39:08.148196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:39:08.149056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:39:08.170571 jq[1299]: true May 17 00:39:08.149289 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:39:08.170750 jq[1308]: true May 17 00:39:08.164764 systemd[1]: Started dbus.service. May 17 00:39:08.167285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:39:08.167306 systemd[1]: Reached target system-config.target. May 17 00:39:08.168262 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:39:08.168275 systemd[1]: Reached target user-config.target. 
May 17 00:39:08.171742 extend-filesystems[1281]: Found loop1 May 17 00:39:08.172702 extend-filesystems[1281]: Found sr0 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda May 17 00:39:08.172702 extend-filesystems[1281]: Found vda1 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda2 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda3 May 17 00:39:08.172702 extend-filesystems[1281]: Found usr May 17 00:39:08.172702 extend-filesystems[1281]: Found vda4 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda6 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda7 May 17 00:39:08.172702 extend-filesystems[1281]: Found vda9 May 17 00:39:08.172702 extend-filesystems[1281]: Checking size of /dev/vda9 May 17 00:39:08.175118 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:39:08.175356 systemd[1]: Finished motdgen.service. May 17 00:39:08.195679 update_engine[1294]: I0517 00:39:08.194983 1294 main.cc:92] Flatcar Update Engine starting May 17 00:39:08.196741 systemd[1]: Started update-engine.service. May 17 00:39:08.196817 update_engine[1294]: I0517 00:39:08.196784 1294 update_check_scheduler.cc:74] Next update check in 7m6s May 17 00:39:08.202874 env[1303]: time="2025-05-17T00:39:08.202152525Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:39:08.206490 systemd[1]: Started locksmithd.service. May 17 00:39:08.209418 extend-filesystems[1281]: Resized partition /dev/vda9 May 17 00:39:08.234693 env[1303]: time="2025-05-17T00:39:08.234594988Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:39:08.234941 env[1303]: time="2025-05-17T00:39:08.234924185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.235934 env[1303]: time="2025-05-17T00:39:08.235911587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.236014 env[1303]: time="2025-05-17T00:39:08.235996186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.236344 env[1303]: time="2025-05-17T00:39:08.236326956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.236418 env[1303]: time="2025-05-17T00:39:08.236399773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.236509 env[1303]: time="2025-05-17T00:39:08.236481426Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:39:08.236579 env[1303]: time="2025-05-17T00:39:08.236561516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.236714 env[1303]: time="2025-05-17T00:39:08.236696770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.236966 env[1303]: time="2025-05-17T00:39:08.236949925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.237182 env[1303]: time="2025-05-17T00:39:08.237164367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.237255 env[1303]: time="2025-05-17T00:39:08.237236492Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:39:08.237369 env[1303]: time="2025-05-17T00:39:08.237351739Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:39:08.237440 env[1303]: time="2025-05-17T00:39:08.237421840Z" level=info msg="metadata content store policy set" policy=shared May 17 00:39:08.264900 extend-filesystems[1339]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:39:08.273650 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:39:08.273669 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:39:08.274743 systemd-logind[1293]: New seat seat0. May 17 00:39:08.279035 systemd[1]: Started systemd-logind.service. May 17 00:39:08.344376 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:39:08.354122 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:39:08.434622 env[1303]: time="2025-05-17T00:39:08.434551238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:39:08.434622 env[1303]: time="2025-05-17T00:39:08.434617984Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:39:08.434622 env[1303]: time="2025-05-17T00:39:08.434632591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:39:08.434797 env[1303]: time="2025-05-17T00:39:08.434723411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 17 00:39:08.434797 env[1303]: time="2025-05-17T00:39:08.434753117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434797 env[1303]: time="2025-05-17T00:39:08.434768165Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434797 env[1303]: time="2025-05-17T00:39:08.434781781Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434797 env[1303]: time="2025-05-17T00:39:08.434795346Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434916 env[1303]: time="2025-05-17T00:39:08.434809232Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434916 env[1303]: time="2025-05-17T00:39:08.434824431Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434916 env[1303]: time="2025-05-17T00:39:08.434837365Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:39:08.434916 env[1303]: time="2025-05-17T00:39:08.434850299Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:39:08.435002 env[1303]: time="2025-05-17T00:39:08.434962520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:39:08.435053 env[1303]: time="2025-05-17T00:39:08.435033142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:39:08.435397 env[1303]: time="2025-05-17T00:39:08.435381496Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 17 00:39:08.435450 env[1303]: time="2025-05-17T00:39:08.435408366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435450 env[1303]: time="2025-05-17T00:39:08.435421811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:39:08.435529 env[1303]: time="2025-05-17T00:39:08.435468389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435529 env[1303]: time="2025-05-17T00:39:08.435481854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435529 env[1303]: time="2025-05-17T00:39:08.435509346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435602 env[1303]: time="2025-05-17T00:39:08.435527429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435602 env[1303]: time="2025-05-17T00:39:08.435544612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435602 env[1303]: time="2025-05-17T00:39:08.435559900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435602 env[1303]: time="2025-05-17T00:39:08.435574548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435602 env[1303]: time="2025-05-17T00:39:08.435588905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435714 env[1303]: time="2025-05-17T00:39:08.435605506Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 17 00:39:08.435738 env[1303]: time="2025-05-17T00:39:08.435722245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435761 env[1303]: time="2025-05-17T00:39:08.435736752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435761 env[1303]: time="2025-05-17T00:39:08.435748975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:39:08.435807 env[1303]: time="2025-05-17T00:39:08.435760186Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:39:08.435807 env[1303]: time="2025-05-17T00:39:08.435775174Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:39:08.435807 env[1303]: time="2025-05-17T00:39:08.435786525Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:39:08.435873 env[1303]: time="2025-05-17T00:39:08.435804409Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:39:08.435873 env[1303]: time="2025-05-17T00:39:08.435837721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:39:08.436080 env[1303]: time="2025-05-17T00:39:08.436033298Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:39:08.437695 env[1303]: time="2025-05-17T00:39:08.436091688Z" level=info msg="Connect containerd service" May 17 00:39:08.437695 env[1303]: time="2025-05-17T00:39:08.436139548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:39:08.437695 env[1303]: time="2025-05-17T00:39:08.436720638Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:39:08.437695 env[1303]: time="2025-05-17T00:39:08.436968132Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:39:08.437695 env[1303]: time="2025-05-17T00:39:08.437002917Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:39:08.437173 systemd[1]: Started containerd.service. 
May 17 00:39:08.438927 env[1303]: time="2025-05-17T00:39:08.438910986Z" level=info msg="containerd successfully booted in 0.238416s" May 17 00:39:08.444128 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:39:08.447059 env[1303]: time="2025-05-17T00:39:08.446950987Z" level=info msg="Start subscribing containerd event" May 17 00:39:08.447692 env[1303]: time="2025-05-17T00:39:08.447433472Z" level=info msg="Start recovering state" May 17 00:39:08.447692 env[1303]: time="2025-05-17T00:39:08.447532267Z" level=info msg="Start event monitor" May 17 00:39:08.447692 env[1303]: time="2025-05-17T00:39:08.447552445Z" level=info msg="Start snapshots syncer" May 17 00:39:08.447692 env[1303]: time="2025-05-17T00:39:08.447564037Z" level=info msg="Start cni network conf syncer for default" May 17 00:39:08.447692 env[1303]: time="2025-05-17T00:39:08.447574186Z" level=info msg="Start streaming server" May 17 00:39:08.451080 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:39:08.485732 extend-filesystems[1339]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:39:08.485732 extend-filesystems[1339]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:39:08.485732 extend-filesystems[1339]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:39:08.491981 bash[1333]: Updated "/home/core/.ssh/authorized_keys" May 17 00:39:08.488468 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:39:08.492123 extend-filesystems[1281]: Resized filesystem in /dev/vda9 May 17 00:39:08.488743 systemd[1]: Finished extend-filesystems.service. May 17 00:39:08.500550 sshd_keygen[1316]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:39:08.522580 systemd[1]: Finished sshd-keygen.service. May 17 00:39:08.525426 systemd[1]: Starting issuegen.service... May 17 00:39:08.533259 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:39:08.533533 systemd[1]: Finished issuegen.service. 
May 17 00:39:08.536070 systemd[1]: Starting systemd-user-sessions.service... May 17 00:39:08.540540 systemd[1]: Finished systemd-user-sessions.service. May 17 00:39:08.542771 systemd[1]: Started getty@tty1.service. May 17 00:39:08.544700 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:39:08.545848 systemd[1]: Reached target getty.target. May 17 00:39:08.619285 tar[1301]: linux-amd64/LICENSE May 17 00:39:08.619407 tar[1301]: linux-amd64/README.md May 17 00:39:08.623662 systemd[1]: Finished prepare-helm.service. May 17 00:39:08.971315 systemd-networkd[1079]: eth0: Gained IPv6LL May 17 00:39:08.973329 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:39:08.974932 systemd[1]: Reached target network-online.target. May 17 00:39:08.977410 systemd[1]: Starting kubelet.service... May 17 00:39:09.638849 systemd[1]: Started kubelet.service. May 17 00:39:09.640187 systemd[1]: Reached target multi-user.target. May 17 00:39:09.642457 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:39:09.649834 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:39:09.650012 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:39:09.652061 systemd[1]: Startup finished in 5.496s (kernel) + 6.069s (userspace) = 11.565s. May 17 00:39:10.046873 kubelet[1381]: E0517 00:39:10.046756 1381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:10.048288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:10.048425 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:11.468707 systemd[1]: Created slice system-sshd.slice. 
May 17 00:39:11.469858 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:44448.service. May 17 00:39:11.507713 sshd[1391]: Accepted publickey for core from 10.0.0.1 port 44448 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.509079 sshd[1391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.516205 systemd[1]: Created slice user-500.slice. May 17 00:39:11.517023 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:39:11.518493 systemd-logind[1293]: New session 1 of user core. May 17 00:39:11.525055 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:39:11.525980 systemd[1]: Starting user@500.service... May 17 00:39:11.529320 (systemd)[1396]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.606162 systemd[1396]: Queued start job for default target default.target. May 17 00:39:11.606371 systemd[1396]: Reached target paths.target. May 17 00:39:11.606387 systemd[1396]: Reached target sockets.target. May 17 00:39:11.606399 systemd[1396]: Reached target timers.target. May 17 00:39:11.606410 systemd[1396]: Reached target basic.target. May 17 00:39:11.606449 systemd[1396]: Reached target default.target. May 17 00:39:11.606484 systemd[1396]: Startup finished in 70ms. May 17 00:39:11.606598 systemd[1]: Started user@500.service. May 17 00:39:11.607656 systemd[1]: Started session-1.scope. May 17 00:39:11.657084 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:44460.service. May 17 00:39:11.695092 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.696373 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.700027 systemd-logind[1293]: New session 2 of user core. May 17 00:39:11.700780 systemd[1]: Started session-2.scope. 
May 17 00:39:11.754682 sshd[1405]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.757327 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:44476.service. May 17 00:39:11.757793 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:44460.service: Deactivated successfully. May 17 00:39:11.758827 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:39:11.758909 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit. May 17 00:39:11.759907 systemd-logind[1293]: Removed session 2. May 17 00:39:11.794957 sshd[1410]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.796122 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.799745 systemd-logind[1293]: New session 3 of user core. May 17 00:39:11.800572 systemd[1]: Started session-3.scope. May 17 00:39:11.851421 sshd[1410]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.854171 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:44488.service. May 17 00:39:11.854700 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:44476.service: Deactivated successfully. May 17 00:39:11.856472 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:39:11.856512 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit. May 17 00:39:11.857640 systemd-logind[1293]: Removed session 3. May 17 00:39:11.889493 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 44488 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.890645 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.894449 systemd-logind[1293]: New session 4 of user core. May 17 00:39:11.895083 systemd[1]: Started session-4.scope. May 17 00:39:11.949564 sshd[1418]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.952035 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:44490.service. 
May 17 00:39:11.952452 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:44488.service: Deactivated successfully. May 17 00:39:11.953359 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:39:11.953470 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit. May 17 00:39:11.954379 systemd-logind[1293]: Removed session 4. May 17 00:39:11.987946 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.989390 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.993122 systemd-logind[1293]: New session 5 of user core. May 17 00:39:11.993766 systemd[1]: Started session-5.scope. May 17 00:39:12.051916 sudo[1430]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:39:12.052165 sudo[1430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:39:12.060326 dbus-daemon[1279]: \xd0=d\x88!V: received setenforce notice (enforcing=632368752) May 17 00:39:12.061981 sudo[1430]: pam_unix(sudo:session): session closed for user root May 17 00:39:12.063504 sshd[1424]: pam_unix(sshd:session): session closed for user core May 17 00:39:12.066255 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:44496.service. May 17 00:39:12.066800 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:44490.service: Deactivated successfully. May 17 00:39:12.067702 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:39:12.067751 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit. May 17 00:39:12.068640 systemd-logind[1293]: Removed session 5. May 17 00:39:12.100360 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 44496 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:12.101412 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:12.104894 systemd-logind[1293]: New session 6 of user core. 
May 17 00:39:12.105839 systemd[1]: Started session-6.scope. May 17 00:39:12.159229 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:39:12.159465 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:39:12.161707 sudo[1439]: pam_unix(sudo:session): session closed for user root May 17 00:39:12.166295 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:39:12.166531 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:39:12.174391 systemd[1]: Stopping audit-rules.service... May 17 00:39:12.175000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:39:12.175693 auditctl[1442]: No rules May 17 00:39:12.175984 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:39:12.176222 systemd[1]: Stopped audit-rules.service. May 17 00:39:12.176653 kernel: kauditd_printk_skb: 229 callbacks suppressed May 17 00:39:12.176695 kernel: audit: type=1305 audit(1747442352.175:150): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:39:12.177836 systemd[1]: Starting audit-rules.service... 
May 17 00:39:12.175000 audit[1442]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfdd2cfe0 a2=420 a3=0 items=0 ppid=1 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:12.186764 kernel: audit: type=1300 audit(1747442352.175:150): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfdd2cfe0 a2=420 a3=0 items=0 ppid=1 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:12.186823 kernel: audit: type=1327 audit(1747442352.175:150): proctitle=2F7362696E2F617564697463746C002D44 May 17 00:39:12.186842 kernel: audit: type=1131 audit(1747442352.176:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.175000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 17 00:39:12.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.193768 augenrules[1460]: No rules May 17 00:39:12.194703 systemd[1]: Finished audit-rules.service. May 17 00:39:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:12.198570 sudo[1438]: pam_unix(sudo:session): session closed for user root May 17 00:39:12.199136 kernel: audit: type=1130 audit(1747442352.194:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.199178 kernel: audit: type=1106 audit(1747442352.198:153): pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.198000 audit[1438]: USER_END pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.200029 sshd[1433]: pam_unix(sshd:session): session closed for user core May 17 00:39:12.201971 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:44504.service. May 17 00:39:12.198000 audit[1438]: CRED_DISP pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.203709 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:44496.service: Deactivated successfully. May 17 00:39:12.204474 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:39:12.205237 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit. May 17 00:39:12.206224 systemd-logind[1293]: Removed session 6. May 17 00:39:12.207413 kernel: audit: type=1104 audit(1747442352.198:154): pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:39:12.207453 kernel: audit: type=1130 audit(1747442352.201:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:44504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:44504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.202000 audit[1433]: USER_END pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.216545 kernel: audit: type=1106 audit(1747442352.202:156): pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.216592 kernel: audit: type=1104 audit(1747442352.202:157): pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.202000 audit[1433]: CRED_DISP pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.136:22-10.0.0.1:44496 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:12.246000 audit[1465]: USER_ACCT pid=1465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.247712 sshd[1465]: Accepted publickey for core from 10.0.0.1 port 44504 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:12.247000 audit[1465]: CRED_ACQ pid=1465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.247000 audit[1465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9c069620 a2=3 a3=0 items=0 ppid=1 pid=1465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:12.247000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:39:12.248799 sshd[1465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:12.252208 systemd-logind[1293]: New session 7 of user core. May 17 00:39:12.252839 systemd[1]: Started session-7.scope. 
May 17 00:39:12.256000 audit[1465]: USER_START pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.257000 audit[1470]: CRED_ACQ pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:39:12.306000 audit[1471]: USER_ACCT pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.306000 audit[1471]: CRED_REFR pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.307928 sudo[1471]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:39:12.308193 sudo[1471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:39:12.308000 audit[1471]: USER_START pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:39:12.327521 systemd[1]: Starting docker.service... 
May 17 00:39:12.360915 env[1483]: time="2025-05-17T00:39:12.360864827Z" level=info msg="Starting up" May 17 00:39:12.361957 env[1483]: time="2025-05-17T00:39:12.361930369Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:39:12.361957 env[1483]: time="2025-05-17T00:39:12.361943887Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:39:12.362026 env[1483]: time="2025-05-17T00:39:12.361959974Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:39:12.362026 env[1483]: time="2025-05-17T00:39:12.361968143Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:39:12.363322 env[1483]: time="2025-05-17T00:39:12.363295055Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:39:12.363322 env[1483]: time="2025-05-17T00:39:12.363315126Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:39:12.363390 env[1483]: time="2025-05-17T00:39:12.363329859Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:39:12.363390 env[1483]: time="2025-05-17T00:39:12.363340507Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:39:13.820037 env[1483]: time="2025-05-17T00:39:13.819982090Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:39:13.820037 env[1483]: time="2025-05-17T00:39:13.820008592Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:39:13.820574 env[1483]: time="2025-05-17T00:39:13.820261254Z" level=info msg="Loading containers: start." 
May 17 00:39:13.867000 audit[1517]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.867000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec25443b0 a2=0 a3=7ffec254439c items=0 ppid=1483 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.867000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 00:39:13.868000 audit[1519]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.868000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc4f32f530 a2=0 a3=7ffc4f32f51c items=0 ppid=1483 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.868000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 00:39:13.869000 audit[1521]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.869000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff91eefb00 a2=0 a3=7fff91eefaec items=0 ppid=1483 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.869000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:39:13.871000 audit[1523]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.871000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffda8293970 a2=0 a3=7ffda829395c items=0 ppid=1483 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.871000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:39:13.872000 audit[1525]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.872000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc62badb90 a2=0 a3=7ffc62badb7c items=0 ppid=1483 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.872000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 17 00:39:13.885000 audit[1530]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:13.885000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd94d56bd0 a2=0 a3=7ffd94d56bbc items=0 ppid=1483 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:13.885000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 17 00:39:14.020000 audit[1532]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.020000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd87290fe0 a2=0 a3=7ffd87290fcc items=0 ppid=1483 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.020000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 17 00:39:14.022000 audit[1534]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.022000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffced479970 a2=0 a3=7ffced47995c items=0 ppid=1483 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.022000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 17 00:39:14.023000 audit[1536]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.023000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffdffb77cb0 a2=0 a3=7ffdffb77c9c items=0 ppid=1483 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.023000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:39:14.119000 audit[1540]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.119000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcf7c14c40 a2=0 a3=7ffcf7c14c2c items=0 ppid=1483 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.119000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:39:14.131000 audit[1541]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.131000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeb6126a00 a2=0 a3=7ffeb61269ec items=0 ppid=1483 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:39:14.142133 kernel: Initializing XFRM netlink socket May 17 00:39:14.172442 env[1483]: time="2025-05-17T00:39:14.172404868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 17 00:39:14.187000 audit[1549]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.187000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffef293dcf0 a2=0 a3=7ffef293dcdc items=0 ppid=1483 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.187000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 17 00:39:14.200000 audit[1552]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.200000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffee81db4b0 a2=0 a3=7ffee81db49c items=0 ppid=1483 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.200000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 17 00:39:14.202000 audit[1555]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.202000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd381590f0 a2=0 a3=7ffd381590dc items=0 ppid=1483 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.202000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 17 00:39:14.204000 audit[1557]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.204000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff2f564900 a2=0 a3=7fff2f5648ec items=0 ppid=1483 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 17 00:39:14.206000 audit[1559]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.206000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff8e4dfa70 a2=0 a3=7fff8e4dfa5c items=0 ppid=1483 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.206000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 17 00:39:14.207000 audit[1561]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.207000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd8a5230c0 a2=0 a3=7ffd8a5230ac items=0 ppid=1483 
pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 17 00:39:14.209000 audit[1563]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.209000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd150eafe0 a2=0 a3=7ffd150eafcc items=0 ppid=1483 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.209000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 17 00:39:14.215000 audit[1566]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.215000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc93ca6e60 a2=0 a3=7ffc93ca6e4c items=0 ppid=1483 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.215000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 17 00:39:14.217000 audit[1568]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.217000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffcad7aef40 a2=0 a3=7ffcad7aef2c items=0 ppid=1483 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.217000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:39:14.218000 audit[1570]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.218000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe81d17b80 a2=0 a3=7ffe81d17b6c items=0 ppid=1483 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.218000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:39:14.220000 audit[1572]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.220000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffaf58f230 a2=0 a3=7fffaf58f21c items=0 ppid=1483 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.220000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 17 00:39:14.221956 systemd-networkd[1079]: docker0: Link UP May 17 00:39:14.489000 audit[1576]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.489000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffa18f8d70 a2=0 a3=7fffa18f8d5c items=0 ppid=1483 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.489000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:39:14.495000 audit[1577]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:14.495000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe650f2ab0 a2=0 a3=7ffe650f2a9c items=0 ppid=1483 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:14.495000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:39:14.497102 env[1483]: time="2025-05-17T00:39:14.497068524Z" level=info msg="Loading containers: done." 
May 17 00:39:14.575385 env[1483]: time="2025-05-17T00:39:14.575325982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:39:14.575547 env[1483]: time="2025-05-17T00:39:14.575536638Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:39:14.575655 env[1483]: time="2025-05-17T00:39:14.575634496Z" level=info msg="Daemon has completed initialization" May 17 00:39:14.655796 systemd[1]: Started docker.service. May 17 00:39:14.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:14.665008 env[1483]: time="2025-05-17T00:39:14.664942809Z" level=info msg="API listen on /run/docker.sock" May 17 00:39:15.336908 env[1303]: time="2025-05-17T00:39:15.336834470Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:39:15.958951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509424051.mount: Deactivated successfully. 
May 17 00:39:17.950872 env[1303]: time="2025-05-17T00:39:17.950804193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:17.952822 env[1303]: time="2025-05-17T00:39:17.952789417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:17.954817 env[1303]: time="2025-05-17T00:39:17.954788675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:17.956426 env[1303]: time="2025-05-17T00:39:17.956397193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:17.957146 env[1303]: time="2025-05-17T00:39:17.957120772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:39:17.957707 env[1303]: time="2025-05-17T00:39:17.957673884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:39:19.874336 env[1303]: time="2025-05-17T00:39:19.874272907Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:19.900738 env[1303]: time="2025-05-17T00:39:19.900649120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:39:19.944371 env[1303]: time="2025-05-17T00:39:19.944307673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:19.966439 env[1303]: time="2025-05-17T00:39:19.966370201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:19.967146 env[1303]: time="2025-05-17T00:39:19.967083182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:39:19.967643 env[1303]: time="2025-05-17T00:39:19.967620656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:39:20.064972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:39:20.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:20.065210 systemd[1]: Stopped kubelet.service. May 17 00:39:20.066142 kernel: kauditd_printk_skb: 84 callbacks suppressed May 17 00:39:20.066221 kernel: audit: type=1130 audit(1747442360.064:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:20.066721 systemd[1]: Starting kubelet.service... May 17 00:39:20.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:39:20.072327 kernel: audit: type=1131 audit(1747442360.064:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:20.632540 systemd[1]: Started kubelet.service. May 17 00:39:20.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:20.637146 kernel: audit: type=1130 audit(1747442360.631:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:20.668765 kubelet[1622]: E0517 00:39:20.668711 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:20.672548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:20.672681 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:20.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:39:20.677137 kernel: audit: type=1131 audit(1747442360.671:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 17 00:39:24.768083 env[1303]: time="2025-05-17T00:39:24.768028010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:24.780232 env[1303]: time="2025-05-17T00:39:24.780172238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:24.783434 env[1303]: time="2025-05-17T00:39:24.783395406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:24.785614 env[1303]: time="2025-05-17T00:39:24.785581864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:24.786216 env[1303]: time="2025-05-17T00:39:24.786194359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:39:24.786681 env[1303]: time="2025-05-17T00:39:24.786645626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:39:28.344679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817285276.mount: Deactivated successfully. 
May 17 00:39:29.210721 env[1303]: time="2025-05-17T00:39:29.210659101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.219965 env[1303]: time="2025-05-17T00:39:29.219940360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.224547 env[1303]: time="2025-05-17T00:39:29.224508608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.232637 env[1303]: time="2025-05-17T00:39:29.232596076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.233097 env[1303]: time="2025-05-17T00:39:29.233061812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:39:29.233567 env[1303]: time="2025-05-17T00:39:29.233547145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:39:29.749485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199938841.mount: Deactivated successfully. May 17 00:39:30.815025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:39:30.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:30.815224 systemd[1]: Stopped kubelet.service. 
May 17 00:39:30.817006 systemd[1]: Starting kubelet.service... May 17 00:39:30.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:30.822844 kernel: audit: type=1130 audit(1747442370.814:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:30.822888 kernel: audit: type=1131 audit(1747442370.814:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:30.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:30.900750 systemd[1]: Started kubelet.service. May 17 00:39:30.905133 kernel: audit: type=1130 audit(1747442370.899:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:31.213471 kubelet[1638]: E0517 00:39:31.213338 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:31.215591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:31.215757 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:39:31.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:39:31.220139 kernel: audit: type=1131 audit(1747442371.214:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:39:31.625336 env[1303]: time="2025-05-17T00:39:31.625288517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:31.629231 env[1303]: time="2025-05-17T00:39:31.629207015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:31.631182 env[1303]: time="2025-05-17T00:39:31.631154460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:31.633439 env[1303]: time="2025-05-17T00:39:31.633398519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:31.634181 env[1303]: time="2025-05-17T00:39:31.634146511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:39:31.634784 env[1303]: time="2025-05-17T00:39:31.634644282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:39:32.171720 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1112920442.mount: Deactivated successfully. May 17 00:39:32.179257 env[1303]: time="2025-05-17T00:39:32.179195575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.181166 env[1303]: time="2025-05-17T00:39:32.181101493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.182594 env[1303]: time="2025-05-17T00:39:32.182559203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.183971 env[1303]: time="2025-05-17T00:39:32.183946698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.184416 env[1303]: time="2025-05-17T00:39:32.184381966Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:39:32.184802 env[1303]: time="2025-05-17T00:39:32.184779446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:39:33.314594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319840559.mount: Deactivated successfully. 
May 17 00:39:38.026524 env[1303]: time="2025-05-17T00:39:38.026446166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:38.087151 env[1303]: time="2025-05-17T00:39:38.087074457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:38.113127 env[1303]: time="2025-05-17T00:39:38.113052911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:38.123063 env[1303]: time="2025-05-17T00:39:38.123025415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:38.124120 env[1303]: time="2025-05-17T00:39:38.124041871Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:39:40.942426 systemd[1]: Stopped kubelet.service. May 17 00:39:40.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:40.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:40.946686 systemd[1]: Starting kubelet.service... 
May 17 00:39:40.950228 kernel: audit: type=1130 audit(1747442380.941:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:40.950296 kernel: audit: type=1131 audit(1747442380.941:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:40.969274 systemd[1]: Reloading. May 17 00:39:41.032145 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-05-17T00:39:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:41.032487 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-05-17T00:39:41Z" level=info msg="torcx already run" May 17 00:39:41.402182 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:41.402197 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:41.420845 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:39:41.487178 systemd[1]: Started kubelet.service. May 17 00:39:41.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:41.490762 systemd[1]: Stopping kubelet.service... May 17 00:39:41.492892 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:39:41.493163 systemd[1]: Stopped kubelet.service. May 17 00:39:41.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:41.522787 systemd[1]: Starting kubelet.service... May 17 00:39:41.526668 kernel: audit: type=1130 audit(1747442381.486:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:41.526737 kernel: audit: type=1131 audit(1747442381.491:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:41.606647 systemd[1]: Started kubelet.service. May 17 00:39:41.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:41.613140 kernel: audit: type=1130 audit(1747442381.606:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:41.638473 kubelet[1764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:39:41.638473 kubelet[1764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 17 00:39:41.638473 kubelet[1764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:39:41.638904 kubelet[1764]: I0517 00:39:41.638517 1764 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:39:41.916292 kubelet[1764]: I0517 00:39:41.916246 1764 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:39:41.916292 kubelet[1764]: I0517 00:39:41.916277 1764 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:39:41.916581 kubelet[1764]: I0517 00:39:41.916563 1764 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:39:41.942714 kubelet[1764]: E0517 00:39:41.942677 1764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:41.943309 kubelet[1764]: I0517 00:39:41.943293 1764 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:39:41.948837 kubelet[1764]: E0517 00:39:41.948805 1764 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:39:41.948837 kubelet[1764]: I0517 00:39:41.948829 1764 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:39:41.953764 kubelet[1764]: I0517 00:39:41.953740 1764 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:39:41.954410 kubelet[1764]: I0517 00:39:41.954387 1764 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:39:41.954530 kubelet[1764]: I0517 00:39:41.954495 1764 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:39:41.954697 kubelet[1764]: I0517 00:39:41.954524 1764 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"E
xperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:39:41.954784 kubelet[1764]: I0517 00:39:41.954700 1764 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:39:41.954784 kubelet[1764]: I0517 00:39:41.954709 1764 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:39:41.954832 kubelet[1764]: I0517 00:39:41.954797 1764 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:41.962594 kubelet[1764]: I0517 00:39:41.962568 1764 kubelet.go:408] "Attempting to sync node with API server" May 17 00:39:41.962594 kubelet[1764]: I0517 00:39:41.962594 1764 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:39:41.962674 kubelet[1764]: I0517 00:39:41.962625 1764 kubelet.go:314] "Adding apiserver pod source" May 17 00:39:41.962674 kubelet[1764]: I0517 00:39:41.962639 1764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:39:41.975100 kubelet[1764]: W0517 00:39:41.975045 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:41.975161 kubelet[1764]: E0517 00:39:41.975127 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:41.977066 kubelet[1764]: W0517 00:39:41.977031 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:41.977130 kubelet[1764]: E0517 00:39:41.977075 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:41.977954 kubelet[1764]: I0517 00:39:41.977928 1764 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:39:41.978322 kubelet[1764]: I0517 00:39:41.978302 1764 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:39:41.978373 kubelet[1764]: W0517 00:39:41.978346 1764 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:39:41.980042 kubelet[1764]: I0517 00:39:41.980020 1764 server.go:1274] "Started kubelet" May 17 00:39:41.980263 kubelet[1764]: I0517 00:39:41.980241 1764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:39:41.980583 kubelet[1764]: I0517 00:39:41.980567 1764 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:39:41.980707 kubelet[1764]: I0517 00:39:41.980688 1764 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:39:41.979000 audit[1764]: AVC avc: denied { mac_admin } for pid=1764 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:41.979000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.981076 1764 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.981129 1764 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.981177 1764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.981513 1764 server.go:449] "Adding debug handlers to kubelet server" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.984148 1764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:39:41.986286 kubelet[1764]: E0517 00:39:41.983315 
1764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184029968b6142a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:39:41.979992745 +0000 UTC m=+0.369936815,LastTimestamp:2025-05-17 00:39:41.979992745 +0000 UTC m=+0.369936815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:39:41.986286 kubelet[1764]: E0517 00:39:41.985666 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:41.986286 kubelet[1764]: I0517 00:39:41.985694 1764 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:39:41.986573 kernel: audit: type=1400 audit(1747442381.979:205): avc: denied { mac_admin } for pid=1764 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:41.986600 kernel: audit: type=1401 audit(1747442381.979:205): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:41.986618 kernel: audit: type=1300 audit(1747442381.979:205): arch=c000003e syscall=188 success=no exit=-22 a0=c000b4b4d0 a1=c00055dea8 a2=c000b4b4a0 a3=25 items=0 ppid=1 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.979000 audit[1764]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b4b4d0 
a1=c00055dea8 a2=c000b4b4a0 a3=25 items=0 ppid=1 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.986745 kubelet[1764]: I0517 00:39:41.985820 1764 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:39:41.986745 kubelet[1764]: I0517 00:39:41.985852 1764 reconciler.go:26] "Reconciler: start to sync state" May 17 00:39:41.991321 kernel: audit: type=1327 audit(1747442381.979:205): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:41.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:41.992856 kubelet[1764]: W0517 00:39:41.992823 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:41.992964 kubelet[1764]: E0517 00:39:41.992941 1764 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:39:41.993043 kubelet[1764]: E0517 00:39:41.993025 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:41.993199 kubelet[1764]: E0517 00:39:41.993164 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="200ms" May 17 00:39:41.993460 kubelet[1764]: I0517 00:39:41.993442 1764 factory.go:221] Registration of the systemd container factory successfully May 17 00:39:41.993543 kubelet[1764]: I0517 00:39:41.993520 1764 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:39:41.994286 kubelet[1764]: I0517 00:39:41.994270 1764 factory.go:221] Registration of the containerd container factory successfully May 17 00:39:41.979000 audit[1764]: AVC avc: denied { mac_admin } for pid=1764 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:41.998630 kernel: audit: type=1400 audit(1747442381.979:206): avc: denied { mac_admin } for pid=1764 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:41.979000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:41.979000 audit[1764]: SYSCALL arch=c000003e syscall=188 success=no 
exit=-22 a0=c000b6c860 a1=c00055dec0 a2=c000b4b560 a3=25 items=0 ppid=1 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:41.983000 audit[1777]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:41.983000 audit[1777]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdf6c38370 a2=0 a3=7ffdf6c3835c items=0 ppid=1764 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:39:41.983000 audit[1778]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:41.983000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3fa4f490 a2=0 a3=7ffe3fa4f47c items=0 ppid=1764 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:39:41.988000 audit[1780]: NETFILTER_CFG table=filter:28 family=2 
entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:41.988000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff020c8e70 a2=0 a3=7fff020c8e5c items=0 ppid=1764 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:39:41.989000 audit[1782]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:41.989000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd5bd66910 a2=0 a3=7ffd5bd668fc items=0 ppid=1764 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:41.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:39:42.002000 audit[1789]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:42.002000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcf77fe610 a2=0 a3=7ffcf77fe5fc items=0 ppid=1764 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.002000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:39:42.003720 kubelet[1764]: I0517 00:39:42.003681 1764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:39:42.003000 audit[1790]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:42.003000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8c0ab050 a2=0 a3=7ffd8c0ab03c items=0 ppid=1764 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:39:42.004542 kubelet[1764]: I0517 00:39:42.004519 1764 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:39:42.004628 kubelet[1764]: I0517 00:39:42.004612 1764 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:39:42.004723 kubelet[1764]: I0517 00:39:42.004706 1764 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:39:42.004843 kubelet[1764]: E0517 00:39:42.004823 1764 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:39:42.003000 audit[1791]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:42.003000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0366f080 a2=0 a3=7fff0366f06c items=0 ppid=1764 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:39:42.004000 audit[1792]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:42.004000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce48ee950 a2=0 a3=7ffce48ee93c items=0 ppid=1764 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:39:42.005766 kubelet[1764]: W0517 00:39:42.005545 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:42.005766 kubelet[1764]: E0517 00:39:42.005577 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:42.005000 audit[1793]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:42.005000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff91f8c6c0 a2=0 a3=7fff91f8c6ac items=0 ppid=1764 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:39:42.005000 audit[1794]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:42.005000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd35591de0 a2=0 a3=7ffd35591dcc items=0 ppid=1764 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:39:42.006000 audit[1795]: NETFILTER_CFG table=filter:36 family=2 
entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:42.006000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccb3d8240 a2=0 a3=7ffccb3d822c items=0 ppid=1764 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:39:42.006000 audit[1796]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:42.006000 audit[1796]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe33c18540 a2=0 a3=7ffe33c1852c items=0 ppid=1764 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:42.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:39:42.012758 kubelet[1764]: I0517 00:39:42.012715 1764 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:39:42.012758 kubelet[1764]: I0517 00:39:42.012729 1764 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:39:42.012758 kubelet[1764]: I0517 00:39:42.012741 1764 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:42.086049 kubelet[1764]: E0517 00:39:42.086014 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.105289 kubelet[1764]: E0517 00:39:42.105255 1764 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have 
completed yet" May 17 00:39:42.186231 kubelet[1764]: E0517 00:39:42.186154 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.194596 kubelet[1764]: E0517 00:39:42.194561 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="400ms" May 17 00:39:42.231064 kubelet[1764]: E0517 00:39:42.230974 1764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.136:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.136:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184029968b6142a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:39:41.979992745 +0000 UTC m=+0.369936815,LastTimestamp:2025-05-17 00:39:41.979992745 +0000 UTC m=+0.369936815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:39:42.287276 kubelet[1764]: E0517 00:39:42.287221 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.305378 kubelet[1764]: E0517 00:39:42.305325 1764 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:39:42.387694 kubelet[1764]: E0517 00:39:42.387667 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.488598 kubelet[1764]: E0517 00:39:42.488483 1764 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.588974 kubelet[1764]: E0517 00:39:42.588931 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.595415 kubelet[1764]: E0517 00:39:42.595376 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="800ms" May 17 00:39:42.689930 kubelet[1764]: E0517 00:39:42.689875 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.706098 kubelet[1764]: E0517 00:39:42.706054 1764 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:39:42.790648 kubelet[1764]: E0517 00:39:42.790537 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:42.884328 kubelet[1764]: W0517 00:39:42.884263 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:42.884328 kubelet[1764]: E0517 00:39:42.884324 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:42.890811 kubelet[1764]: E0517 00:39:42.890777 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 17 00:39:42.894230 kubelet[1764]: W0517 00:39:42.894203 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:42.894274 kubelet[1764]: E0517 00:39:42.894234 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:42.991561 kubelet[1764]: E0517 00:39:42.991522 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:43.012466 kubelet[1764]: I0517 00:39:43.012434 1764 policy_none.go:49] "None policy: Start" May 17 00:39:43.013098 kubelet[1764]: I0517 00:39:43.013063 1764 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:39:43.013098 kubelet[1764]: I0517 00:39:43.013087 1764 state_mem.go:35] "Initializing new in-memory state store" May 17 00:39:43.041000 audit[1764]: AVC avc: denied { mac_admin } for pid=1764 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:43.041000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:43.041000 audit[1764]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008bdd40 a1=c00087ab58 a2=c0008bdd10 a3=25 items=0 ppid=1 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:43.041000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:43.043765 kubelet[1764]: I0517 00:39:43.043062 1764 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:39:43.043765 kubelet[1764]: I0517 00:39:43.043126 1764 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:39:43.043765 kubelet[1764]: I0517 00:39:43.043202 1764 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:39:43.043765 kubelet[1764]: I0517 00:39:43.043211 1764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:39:43.044262 kubelet[1764]: I0517 00:39:43.044022 1764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:39:43.044262 kubelet[1764]: W0517 00:39:43.044038 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:43.044262 kubelet[1764]: E0517 00:39:43.044075 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:43.044720 kubelet[1764]: E0517 00:39:43.044699 1764 eviction_manager.go:285] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"localhost\" not found" May 17 00:39:43.145430 kubelet[1764]: I0517 00:39:43.145327 1764 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:43.146521 kubelet[1764]: E0517 00:39:43.145747 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 17 00:39:43.232072 kubelet[1764]: W0517 00:39:43.231968 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.136:6443: connect: connection refused May 17 00:39:43.232072 kubelet[1764]: E0517 00:39:43.232068 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:43.346902 kubelet[1764]: I0517 00:39:43.346854 1764 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:43.347327 kubelet[1764]: E0517 00:39:43.347282 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 17 00:39:43.396059 kubelet[1764]: E0517 00:39:43.396017 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.136:6443: connect: connection refused" interval="1.6s" May 17 00:39:43.594148 kubelet[1764]: I0517 00:39:43.594094 1764 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:43.594148 kubelet[1764]: I0517 00:39:43.594138 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:43.594148 kubelet[1764]: I0517 00:39:43.594157 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:43.594345 kubelet[1764]: I0517 00:39:43.594176 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:43.594345 kubelet[1764]: I0517 00:39:43.594191 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:39:43.594345 kubelet[1764]: I0517 00:39:43.594204 
1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:43.594345 kubelet[1764]: I0517 00:39:43.594217 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:43.594345 kubelet[1764]: I0517 00:39:43.594231 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:43.594456 kubelet[1764]: I0517 00:39:43.594281 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:43.748723 kubelet[1764]: I0517 00:39:43.748607 1764 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:43.749087 kubelet[1764]: E0517 00:39:43.748904 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 17 00:39:43.811752 kubelet[1764]: E0517 00:39:43.811709 1764 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:43.812463 env[1303]: time="2025-05-17T00:39:43.812412131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87bd9fa30544f816d9c59f1458471b66,Namespace:kube-system,Attempt:0,}" May 17 00:39:43.813429 kubelet[1764]: E0517 00:39:43.813411 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:43.813513 kubelet[1764]: E0517 00:39:43.813465 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:43.813745 env[1303]: time="2025-05-17T00:39:43.813655045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 17 00:39:43.813824 env[1303]: time="2025-05-17T00:39:43.813751688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 17 00:39:43.966486 kubelet[1764]: E0517 00:39:43.966443 1764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.136:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:44.381373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718795977.mount: Deactivated successfully. 
May 17 00:39:44.386967 env[1303]: time="2025-05-17T00:39:44.386913578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.387934 env[1303]: time="2025-05-17T00:39:44.387899318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.388861 env[1303]: time="2025-05-17T00:39:44.388821475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.392393 env[1303]: time="2025-05-17T00:39:44.392334565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.393482 env[1303]: time="2025-05-17T00:39:44.393447750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.394598 env[1303]: time="2025-05-17T00:39:44.394565587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.396043 env[1303]: time="2025-05-17T00:39:44.396007258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.397828 env[1303]: time="2025-05-17T00:39:44.397778910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:39:44.399361 env[1303]: time="2025-05-17T00:39:44.399328267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.401337 env[1303]: time="2025-05-17T00:39:44.401304937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.402670 env[1303]: time="2025-05-17T00:39:44.402648375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.403309 env[1303]: time="2025-05-17T00:39:44.403283271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:44.429875 env[1303]: time="2025-05-17T00:39:44.427001297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:39:44.429875 env[1303]: time="2025-05-17T00:39:44.427044256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:39:44.429875 env[1303]: time="2025-05-17T00:39:44.427061653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:39:44.429875 env[1303]: time="2025-05-17T00:39:44.427267002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39f473b161b9a230ce92284728970f920e7fb2214e78a02e9aebb7febfd17f64 pid=1806 runtime=io.containerd.runc.v2 May 17 00:39:44.433346 env[1303]: time="2025-05-17T00:39:44.432548818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:39:44.433346 env[1303]: time="2025-05-17T00:39:44.432580565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:39:44.433346 env[1303]: time="2025-05-17T00:39:44.432592920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:39:44.433346 env[1303]: time="2025-05-17T00:39:44.432722491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de904f222363d1f80a9f8b8c473d10c2294a7b139c26c9907cce129c232547b8 pid=1825 runtime=io.containerd.runc.v2 May 17 00:39:44.441013 env[1303]: time="2025-05-17T00:39:44.439329465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:39:44.441013 env[1303]: time="2025-05-17T00:39:44.439364298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:39:44.441013 env[1303]: time="2025-05-17T00:39:44.439373938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:39:44.441013 env[1303]: time="2025-05-17T00:39:44.439481383Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffead730203708f7081bc4c650c7b5b0a78a54e384f4024fce9848ff491fd388 pid=1847 runtime=io.containerd.runc.v2 May 17 00:39:44.489037 env[1303]: time="2025-05-17T00:39:44.488979884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f473b161b9a230ce92284728970f920e7fb2214e78a02e9aebb7febfd17f64\"" May 17 00:39:44.489819 kubelet[1764]: E0517 00:39:44.489788 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:44.494530 env[1303]: time="2025-05-17T00:39:44.494488555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"de904f222363d1f80a9f8b8c473d10c2294a7b139c26c9907cce129c232547b8\"" May 17 00:39:44.498531 env[1303]: time="2025-05-17T00:39:44.498497070Z" level=info msg="CreateContainer within sandbox \"39f473b161b9a230ce92284728970f920e7fb2214e78a02e9aebb7febfd17f64\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:39:44.499359 kubelet[1764]: E0517 00:39:44.499332 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:44.501504 env[1303]: time="2025-05-17T00:39:44.501429828Z" level=info msg="CreateContainer within sandbox \"de904f222363d1f80a9f8b8c473d10c2294a7b139c26c9907cce129c232547b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 
00:39:44.503795 env[1303]: time="2025-05-17T00:39:44.503755376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87bd9fa30544f816d9c59f1458471b66,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffead730203708f7081bc4c650c7b5b0a78a54e384f4024fce9848ff491fd388\"" May 17 00:39:44.504544 kubelet[1764]: E0517 00:39:44.504517 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:44.505929 env[1303]: time="2025-05-17T00:39:44.505903364Z" level=info msg="CreateContainer within sandbox \"ffead730203708f7081bc4c650c7b5b0a78a54e384f4024fce9848ff491fd388\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:39:44.531374 env[1303]: time="2025-05-17T00:39:44.531329519Z" level=info msg="CreateContainer within sandbox \"39f473b161b9a230ce92284728970f920e7fb2214e78a02e9aebb7febfd17f64\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"269ed9315941ba667c468ba0919ce235a6a3a2b6f863582b10925142f4bd0e3b\"" May 17 00:39:44.531911 env[1303]: time="2025-05-17T00:39:44.531869778Z" level=info msg="StartContainer for \"269ed9315941ba667c468ba0919ce235a6a3a2b6f863582b10925142f4bd0e3b\"" May 17 00:39:44.536470 env[1303]: time="2025-05-17T00:39:44.536440499Z" level=info msg="CreateContainer within sandbox \"de904f222363d1f80a9f8b8c473d10c2294a7b139c26c9907cce129c232547b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bb59056c0112df52bf178097afd64ee3ef37ef4e228dbefacc0995c0ac979e5\"" May 17 00:39:44.537067 env[1303]: time="2025-05-17T00:39:44.537022364Z" level=info msg="StartContainer for \"9bb59056c0112df52bf178097afd64ee3ef37ef4e228dbefacc0995c0ac979e5\"" May 17 00:39:44.539143 env[1303]: time="2025-05-17T00:39:44.539114124Z" level=info msg="CreateContainer within sandbox 
\"ffead730203708f7081bc4c650c7b5b0a78a54e384f4024fce9848ff491fd388\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14d21423a9b8b0e59c374de6a5fe7fd582a452bfec299845714923f7bce5d45b\"" May 17 00:39:44.539608 env[1303]: time="2025-05-17T00:39:44.539588645Z" level=info msg="StartContainer for \"14d21423a9b8b0e59c374de6a5fe7fd582a452bfec299845714923f7bce5d45b\"" May 17 00:39:44.550080 kubelet[1764]: I0517 00:39:44.549821 1764 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:44.550272 kubelet[1764]: E0517 00:39:44.550228 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.136:6443/api/v1/nodes\": dial tcp 10.0.0.136:6443: connect: connection refused" node="localhost" May 17 00:39:44.591461 env[1303]: time="2025-05-17T00:39:44.591360428Z" level=info msg="StartContainer for \"269ed9315941ba667c468ba0919ce235a6a3a2b6f863582b10925142f4bd0e3b\" returns successfully" May 17 00:39:44.598643 env[1303]: time="2025-05-17T00:39:44.598559439Z" level=info msg="StartContainer for \"9bb59056c0112df52bf178097afd64ee3ef37ef4e228dbefacc0995c0ac979e5\" returns successfully" May 17 00:39:44.620127 env[1303]: time="2025-05-17T00:39:44.615560731Z" level=info msg="StartContainer for \"14d21423a9b8b0e59c374de6a5fe7fd582a452bfec299845714923f7bce5d45b\" returns successfully" May 17 00:39:45.012077 kubelet[1764]: E0517 00:39:45.012040 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:45.013800 kubelet[1764]: E0517 00:39:45.013771 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:45.015565 kubelet[1764]: E0517 00:39:45.015539 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:45.703693 kubelet[1764]: E0517 00:39:45.703640 1764 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:39:45.977669 kubelet[1764]: I0517 00:39:45.977564 1764 apiserver.go:52] "Watching apiserver" May 17 00:39:45.986503 kubelet[1764]: I0517 00:39:45.986464 1764 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:39:46.017208 kubelet[1764]: E0517 00:39:46.017181 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:46.151207 kubelet[1764]: I0517 00:39:46.151189 1764 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:46.342629 kubelet[1764]: I0517 00:39:46.342603 1764 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:39:47.928470 kubelet[1764]: E0517 00:39:47.928422 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:48.019191 kubelet[1764]: E0517 00:39:48.019138 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:49.654141 kubelet[1764]: E0517 00:39:49.654086 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:50.022292 kubelet[1764]: E0517 00:39:50.022184 1764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:50.061904 systemd[1]: Reloading. May 17 00:39:50.119284 /usr/lib/systemd/system-generators/torcx-generator[2063]: time="2025-05-17T00:39:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:50.119315 /usr/lib/systemd/system-generators/torcx-generator[2063]: time="2025-05-17T00:39:50Z" level=info msg="torcx already run" May 17 00:39:50.191005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:50.191023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:50.210221 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:39:50.283531 systemd[1]: Stopping kubelet.service... May 17 00:39:50.303588 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:39:50.303963 systemd[1]: Stopped kubelet.service. May 17 00:39:50.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:50.304865 kernel: kauditd_printk_skb: 43 callbacks suppressed May 17 00:39:50.304919 kernel: audit: type=1131 audit(1747442390.302:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:50.306026 systemd[1]: Starting kubelet.service... May 17 00:39:50.398361 systemd[1]: Started kubelet.service. May 17 00:39:50.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:50.403123 kernel: audit: type=1130 audit(1747442390.397:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:50.436467 kubelet[2118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:39:50.436826 kubelet[2118]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:39:50.436826 kubelet[2118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:39:50.436936 kubelet[2118]: I0517 00:39:50.436880 2118 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:39:50.444987 kubelet[2118]: I0517 00:39:50.444944 2118 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:39:50.444987 kubelet[2118]: I0517 00:39:50.444974 2118 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:39:50.445469 kubelet[2118]: I0517 00:39:50.445449 2118 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:39:50.447472 kubelet[2118]: I0517 00:39:50.447459 2118 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:39:50.449314 kubelet[2118]: I0517 00:39:50.449290 2118 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:39:50.455549 kubelet[2118]: E0517 00:39:50.452783 2118 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:39:50.455549 kubelet[2118]: I0517 00:39:50.452827 2118 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:39:50.456801 kubelet[2118]: I0517 00:39:50.456760 2118 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:39:50.457174 kubelet[2118]: I0517 00:39:50.457156 2118 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:39:50.457282 kubelet[2118]: I0517 00:39:50.457250 2118 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:39:50.457451 kubelet[2118]: I0517 00:39:50.457279 2118 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 17 00:39:50.457451 kubelet[2118]: I0517 00:39:50.457444 2118 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:39:50.457451 kubelet[2118]: I0517 00:39:50.457452 2118 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:39:50.457650 kubelet[2118]: I0517 00:39:50.457474 2118 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:50.457650 kubelet[2118]: I0517 00:39:50.457553 2118 kubelet.go:408] "Attempting to sync node with API server" May 17 00:39:50.457650 kubelet[2118]: I0517 00:39:50.457566 2118 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:39:50.457650 kubelet[2118]: I0517 00:39:50.457589 2118 kubelet.go:314] "Adding apiserver pod source" May 17 00:39:50.457650 kubelet[2118]: I0517 00:39:50.457597 2118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:39:50.458250 kubelet[2118]: I0517 00:39:50.458230 2118 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:39:50.458584 kubelet[2118]: I0517 00:39:50.458567 2118 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:39:50.458922 kubelet[2118]: I0517 00:39:50.458896 2118 server.go:1274] "Started kubelet" May 17 00:39:50.463672 kubelet[2118]: I0517 00:39:50.463640 2118 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:39:50.462000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:50.463868 kubelet[2118]: I0517 00:39:50.463686 2118 kubelet.go:1434] "Unprivileged 
containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:39:50.463868 kubelet[2118]: I0517 00:39:50.463719 2118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:39:50.477234 kernel: audit: type=1400 audit(1747442390.462:222): avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:50.477348 kernel: audit: type=1401 audit(1747442390.462:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:50.477364 kernel: audit: type=1300 audit(1747442390.462:222): arch=c000003e syscall=188 success=no exit=-22 a0=c000d2c150 a1=c0005a5c98 a2=c000d2c120 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:50.477381 kernel: audit: type=1327 audit(1747442390.462:222): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:50.462000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:50.462000 audit[2118]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d2c150 a1=c0005a5c98 a2=c000d2c120 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:50.462000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.471919 2118 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.471924 2118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.472312 2118 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.472582 2118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.473706 2118 server.go:449] "Adding debug handlers to kubelet server" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.474171 2118 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:39:50.477510 kubelet[2118]: E0517 00:39:50.474424 2118 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.474833 2118 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.474952 2118 reconciler.go:26] "Reconciler: start to sync state" May 17 00:39:50.477510 kubelet[2118]: I0517 00:39:50.477419 2118 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 00:39:50.478376 kubelet[2118]: I0517 00:39:50.478304 2118 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:39:50.462000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:50.485455 kubelet[2118]: I0517 00:39:50.481775 2118 factory.go:221] Registration of the containerd container factory successfully May 17 00:39:50.485455 kubelet[2118]: I0517 00:39:50.481801 2118 factory.go:221] Registration of the systemd container factory successfully May 17 00:39:50.485455 kubelet[2118]: I0517 00:39:50.484036 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:39:50.485455 kubelet[2118]: I0517 00:39:50.484059 2118 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:39:50.485455 kubelet[2118]: I0517 00:39:50.484091 2118 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:39:50.485455 kubelet[2118]: E0517 00:39:50.484156 2118 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:39:50.492985 kernel: audit: type=1400 audit(1747442390.462:223): avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:50.493068 kernel: audit: type=1401 audit(1747442390.462:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:50.493088 kernel: audit: type=1300 audit(1747442390.462:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000d2e420 a1=c0005a5cb0 a2=c000d2c1e0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:50.462000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:50.462000 audit[2118]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d2e420 a1=c0005a5cb0 a2=c000d2c1e0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:50.462000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:50.500053 kernel: audit: type=1327 audit(1747442390.462:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:50.523719 kubelet[2118]: I0517 00:39:50.523693 2118 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:39:50.523719 kubelet[2118]: I0517 00:39:50.523713 2118 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:39:50.523880 kubelet[2118]: I0517 00:39:50.523731 2118 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:50.523918 kubelet[2118]: I0517 00:39:50.523884 2118 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:39:50.523918 kubelet[2118]: I0517 00:39:50.523895 2118 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:39:50.523918 kubelet[2118]: I0517 00:39:50.523913 2118 policy_none.go:49] "None policy: Start" May 17 
00:39:50.524389 kubelet[2118]: I0517 00:39:50.524378 2118 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:39:50.524465 kubelet[2118]: I0517 00:39:50.524450 2118 state_mem.go:35] "Initializing new in-memory state store" May 17 00:39:50.524564 kubelet[2118]: I0517 00:39:50.524554 2118 state_mem.go:75] "Updated machine memory state" May 17 00:39:50.525481 kubelet[2118]: I0517 00:39:50.525463 2118 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:39:50.524000 audit[2118]: AVC avc: denied { mac_admin } for pid=2118 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:39:50.524000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:39:50.524000 audit[2118]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009c47e0 a1=c000a29620 a2=c0009c47b0 a3=25 items=0 ppid=1 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:50.524000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:39:50.525736 kubelet[2118]: I0517 00:39:50.525516 2118 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:39:50.525736 kubelet[2118]: I0517 00:39:50.525632 2118 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:39:50.525736 kubelet[2118]: I0517 00:39:50.525642 2118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:39:50.525852 kubelet[2118]: I0517 00:39:50.525811 2118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:39:50.628756 kubelet[2118]: I0517 00:39:50.628726 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:39:50.676149 kubelet[2118]: I0517 00:39:50.676088 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.676286 kubelet[2118]: I0517 00:39:50.676156 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.676286 kubelet[2118]: I0517 00:39:50.676186 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:39:50.676286 kubelet[2118]: I0517 00:39:50.676210 
2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.676286 kubelet[2118]: I0517 00:39:50.676228 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:50.676286 kubelet[2118]: I0517 00:39:50.676247 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:50.676415 kubelet[2118]: I0517 00:39:50.676274 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87bd9fa30544f816d9c59f1458471b66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87bd9fa30544f816d9c59f1458471b66\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:50.676415 kubelet[2118]: I0517 00:39:50.676293 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.676415 kubelet[2118]: I0517 00:39:50.676310 2118 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.777179 kubelet[2118]: E0517 00:39:50.777134 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 17 00:39:50.963613 kubelet[2118]: E0517 00:39:50.963148 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:39:50.963613 kubelet[2118]: E0517 00:39:50.963357 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:50.963848 kubelet[2118]: I0517 00:39:50.963730 2118 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 17 00:39:50.963848 kubelet[2118]: I0517 00:39:50.963789 2118 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:39:51.001884 kubelet[2118]: E0517 00:39:51.001858 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:51.078183 kubelet[2118]: E0517 00:39:51.078154 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:51.457908 kubelet[2118]: I0517 00:39:51.457859 2118 apiserver.go:52] "Watching apiserver" May 17 00:39:51.475741 kubelet[2118]: I0517 00:39:51.475709 2118 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" May 17 00:39:51.497599 kubelet[2118]: E0517 00:39:51.497575 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:51.497760 kubelet[2118]: E0517 00:39:51.497648 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:51.628029 kubelet[2118]: E0517 00:39:51.627982 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:39:51.628221 kubelet[2118]: E0517 00:39:51.628191 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:51.729927 kubelet[2118]: I0517 00:39:51.729778 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.72973938 podStartE2EDuration="2.72973938s" podCreationTimestamp="2025-05-17 00:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:51.727590317 +0000 UTC m=+1.326038129" watchObservedRunningTime="2025-05-17 00:39:51.72973938 +0000 UTC m=+1.328187182" May 17 00:39:51.810899 kubelet[2118]: I0517 00:39:51.810827 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.810797748 podStartE2EDuration="4.810797748s" podCreationTimestamp="2025-05-17 00:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:51.810636784 +0000 UTC 
m=+1.409084596" watchObservedRunningTime="2025-05-17 00:39:51.810797748 +0000 UTC m=+1.409245550" May 17 00:39:51.811127 kubelet[2118]: I0517 00:39:51.811007 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.810998933 podStartE2EDuration="1.810998933s" podCreationTimestamp="2025-05-17 00:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:51.797245685 +0000 UTC m=+1.395693487" watchObservedRunningTime="2025-05-17 00:39:51.810998933 +0000 UTC m=+1.409446735" May 17 00:39:52.498855 kubelet[2118]: E0517 00:39:52.498811 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:53.123203 update_engine[1294]: I0517 00:39:53.123164 1294 update_attempter.cc:509] Updating boot flags... May 17 00:39:53.499643 kubelet[2118]: E0517 00:39:53.499534 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:53.987248 kubelet[2118]: E0517 00:39:53.987216 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:56.249799 kubelet[2118]: I0517 00:39:56.249764 2118 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:39:56.250187 env[1303]: time="2025-05-17T00:39:56.250080434Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:39:56.250365 kubelet[2118]: I0517 00:39:56.250293 2118 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:39:56.815581 kubelet[2118]: I0517 00:39:56.815544 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b26c253-8966-4adf-b0a5-3ead74176437-lib-modules\") pod \"kube-proxy-hntkh\" (UID: \"5b26c253-8966-4adf-b0a5-3ead74176437\") " pod="kube-system/kube-proxy-hntkh" May 17 00:39:56.815581 kubelet[2118]: I0517 00:39:56.815578 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992c2\" (UniqueName: \"kubernetes.io/projected/5b26c253-8966-4adf-b0a5-3ead74176437-kube-api-access-992c2\") pod \"kube-proxy-hntkh\" (UID: \"5b26c253-8966-4adf-b0a5-3ead74176437\") " pod="kube-system/kube-proxy-hntkh" May 17 00:39:56.815581 kubelet[2118]: I0517 00:39:56.815595 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b26c253-8966-4adf-b0a5-3ead74176437-kube-proxy\") pod \"kube-proxy-hntkh\" (UID: \"5b26c253-8966-4adf-b0a5-3ead74176437\") " pod="kube-system/kube-proxy-hntkh" May 17 00:39:56.815581 kubelet[2118]: I0517 00:39:56.815607 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b26c253-8966-4adf-b0a5-3ead74176437-xtables-lock\") pod \"kube-proxy-hntkh\" (UID: \"5b26c253-8966-4adf-b0a5-3ead74176437\") " pod="kube-system/kube-proxy-hntkh" May 17 00:39:56.921497 kubelet[2118]: I0517 00:39:56.921453 2118 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:39:57.100344 kubelet[2118]: E0517 00:39:57.100218 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.101226 env[1303]: time="2025-05-17T00:39:57.101168379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hntkh,Uid:5b26c253-8966-4adf-b0a5-3ead74176437,Namespace:kube-system,Attempt:0,}" May 17 00:39:57.479513 env[1303]: time="2025-05-17T00:39:57.478897464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:39:57.479513 env[1303]: time="2025-05-17T00:39:57.479005777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:39:57.479513 env[1303]: time="2025-05-17T00:39:57.479029413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:39:57.479513 env[1303]: time="2025-05-17T00:39:57.479205400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75211b49c2932543ade280e376718c9f1559bc8615ecf1edc52794c5677f0ace pid=2187 runtime=io.containerd.runc.v2 May 17 00:39:57.520659 kubelet[2118]: I0517 00:39:57.520589 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9275f714-fdd8-47c2-a6ff-5b3e48f68242-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-ljbgn\" (UID: \"9275f714-fdd8-47c2-a6ff-5b3e48f68242\") " pod="tigera-operator/tigera-operator-7c5755cdcb-ljbgn" May 17 00:39:57.520659 kubelet[2118]: I0517 00:39:57.520619 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgtg\" (UniqueName: \"kubernetes.io/projected/9275f714-fdd8-47c2-a6ff-5b3e48f68242-kube-api-access-fdgtg\") pod \"tigera-operator-7c5755cdcb-ljbgn\" (UID: \"9275f714-fdd8-47c2-a6ff-5b3e48f68242\") " pod="tigera-operator/tigera-operator-7c5755cdcb-ljbgn" May 17 00:39:57.522254 env[1303]: time="2025-05-17T00:39:57.522207731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hntkh,Uid:5b26c253-8966-4adf-b0a5-3ead74176437,Namespace:kube-system,Attempt:0,} returns sandbox id \"75211b49c2932543ade280e376718c9f1559bc8615ecf1edc52794c5677f0ace\"" May 17 00:39:57.522949 kubelet[2118]: E0517 00:39:57.522794 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.524753 env[1303]: time="2025-05-17T00:39:57.524723502Z" level=info msg="CreateContainer within sandbox \"75211b49c2932543ade280e376718c9f1559bc8615ecf1edc52794c5677f0ace\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" 
May 17 00:39:57.540455 env[1303]: time="2025-05-17T00:39:57.540411979Z" level=info msg="CreateContainer within sandbox \"75211b49c2932543ade280e376718c9f1559bc8615ecf1edc52794c5677f0ace\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03ed2c2460c1d0b364396a2c461f089bf97188d3a5b2bd5a8a01270a937b068c\"" May 17 00:39:57.541065 env[1303]: time="2025-05-17T00:39:57.541014334Z" level=info msg="StartContainer for \"03ed2c2460c1d0b364396a2c461f089bf97188d3a5b2bd5a8a01270a937b068c\"" May 17 00:39:57.581284 env[1303]: time="2025-05-17T00:39:57.581225833Z" level=info msg="StartContainer for \"03ed2c2460c1d0b364396a2c461f089bf97188d3a5b2bd5a8a01270a937b068c\" returns successfully" May 17 00:39:57.675143 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:39:57.675242 kernel: audit: type=1325 audit(1747442397.664:225): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.675272 kernel: audit: type=1300 audit(1747442397.664:225): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff10a270a0 a2=0 a3=7fff10a2708c items=0 ppid=2242 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.664000 audit[2294]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.664000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff10a270a0 a2=0 a3=7fff10a2708c items=0 ppid=2242 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.664000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:39:57.678369 kernel: audit: type=1327 audit(1747442397.664:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:39:57.665000 audit[2296]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.680726 kernel: audit: type=1325 audit(1747442397.665:226): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.680773 kernel: audit: type=1300 audit(1747442397.665:226): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee0a2f530 a2=0 a3=7ffee0a2f51c items=0 ppid=2242 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.665000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee0a2f530 a2=0 a3=7ffee0a2f51c items=0 ppid=2242 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:39:57.687559 kernel: audit: type=1327 audit(1747442397.665:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:39:57.687600 kernel: audit: type=1325 audit(1747442397.666:227): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.666000 audit[2297]: NETFILTER_CFG 
table=filter:40 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.689806 kernel: audit: type=1300 audit(1747442397.666:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc92f28d80 a2=0 a3=7ffc92f28d6c items=0 ppid=2242 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.666000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc92f28d80 a2=0 a3=7ffc92f28d6c items=0 ppid=2242 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.694299 kernel: audit: type=1327 audit(1747442397.666:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:39:57.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:39:57.669000 audit[2295]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.698734 kernel: audit: type=1325 audit(1747442397.669:228): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.669000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6978ae70 a2=0 a3=7ffc6978ae5c items=0 ppid=2242 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.669000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:39:57.670000 audit[2298]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.670000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd291bbd0 a2=0 a3=7fffd291bbbc items=0 ppid=2242 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:39:57.671000 audit[2299]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.671000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd87352b10 a2=0 a3=7ffd87352afc items=0 ppid=2242 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:39:57.768000 audit[2300]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.768000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe241c3240 a2=0 a3=7ffe241c322c items=0 ppid=2242 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 00:39:57.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:39:57.771000 audit[2302]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.771000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfa27bc40 a2=0 a3=7ffdfa27bc2c items=0 ppid=2242 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:39:57.774000 audit[2305]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.774000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd9f5a2370 a2=0 a3=7ffd9f5a235c items=0 ppid=2242 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.776040 env[1303]: time="2025-05-17T00:39:57.775928379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-ljbgn,Uid:9275f714-fdd8-47c2-a6ff-5b3e48f68242,Namespace:tigera-operator,Attempt:0,}" May 17 00:39:57.774000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:39:57.775000 audit[2306]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.775000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdde9f3f90 a2=0 a3=7ffdde9f3f7c items=0 ppid=2242 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:39:57.778000 audit[2308]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.778000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc20d9610 a2=0 a3=7ffdc20d95fc items=0 ppid=2242 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:39:57.779000 audit[2309]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.779000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffd0780ced0 a2=0 a3=7ffd0780cebc items=0 ppid=2242 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:39:57.781000 audit[2311]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.781000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe216a8d0 a2=0 a3=7fffe216a8bc items=0 ppid=2242 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:39:57.784000 audit[2314]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.784000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe20d597b0 a2=0 a3=7ffe20d5979c items=0 ppid=2242 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.784000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:39:57.785000 audit[2315]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.785000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd73ae2e0 a2=0 a3=7ffdd73ae2cc items=0 ppid=2242 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:39:57.787000 audit[2317]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.787000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc49193120 a2=0 a3=7ffc4919310c items=0 ppid=2242 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:39:57.789000 audit[2318]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.789000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffd60ddf5e0 a2=0 a3=7ffd60ddf5cc items=0 ppid=2242 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:39:57.792000 audit[2332]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.792000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe07b408e0 a2=0 a3=7ffe07b408cc items=0 ppid=2242 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:39:57.794812 env[1303]: time="2025-05-17T00:39:57.794052078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:39:57.794812 env[1303]: time="2025-05-17T00:39:57.794085955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:39:57.794812 env[1303]: time="2025-05-17T00:39:57.794095584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:39:57.794812 env[1303]: time="2025-05-17T00:39:57.794210329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2052e7ba80485089532c2d2b4000dd70de476b5f6e7fb18975ce03216251ec0d pid=2327 runtime=io.containerd.runc.v2 May 17 00:39:57.795000 audit[2341]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.795000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe04dade00 a2=0 a3=7ffe04daddec items=0 ppid=2242 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:39:57.799000 audit[2349]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.799000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe5bcb950 a2=0 a3=7fffe5bcb93c items=0 ppid=2242 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D 
May 17 00:39:57.800000 audit[2351]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.800000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc06913e40 a2=0 a3=7ffc06913e2c items=0 ppid=2242 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:39:57.802000 audit[2355]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.802000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffdbcb35a0 a2=0 a3=7fffdbcb358c items=0 ppid=2242 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:39:57.805000 audit[2358]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.805000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff9585360 a2=0 a3=7ffff958534c items=0 ppid=2242 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.805000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:39:57.806000 audit[2359]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.806000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff33ed71f0 a2=0 a3=7fff33ed71dc items=0 ppid=2242 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:39:57.809000 audit[2363]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:39:57.809000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe25215180 a2=0 a3=7ffe2521516c items=0 ppid=2242 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:39:57.837000 audit[2374]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:39:57.837000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd469f9600 a2=0 a3=7ffd469f95ec 
items=0 ppid=2242 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:39:57.842175 env[1303]: time="2025-05-17T00:39:57.842138255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-ljbgn,Uid:9275f714-fdd8-47c2-a6ff-5b3e48f68242,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2052e7ba80485089532c2d2b4000dd70de476b5f6e7fb18975ce03216251ec0d\"" May 17 00:39:57.843511 env[1303]: time="2025-05-17T00:39:57.843474785Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:39:57.847000 audit[2374]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:39:57.847000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd469f9600 a2=0 a3=7ffd469f95ec items=0 ppid=2242 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:39:57.849000 audit[2386]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.849000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc50fb4280 a2=0 a3=7ffc50fb426c items=0 ppid=2242 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:39:57.851000 audit[2388]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.851000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcb7a565e0 a2=0 a3=7ffcb7a565cc items=0 ppid=2242 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:39:57.855000 audit[2391]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.855000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffea80b7950 a2=0 a3=7ffea80b793c items=0 ppid=2242 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:39:57.855000 audit[2392]: NETFILTER_CFG table=filter:68 family=10 entries=1 
op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.855000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5700ab50 a2=0 a3=7ffd5700ab3c items=0 ppid=2242 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:39:57.858000 audit[2394]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.858000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff48b5cf50 a2=0 a3=7fff48b5cf3c items=0 ppid=2242 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:39:57.859000 audit[2395]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.859000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd88211430 a2=0 a3=7ffd8821141c items=0 ppid=2242 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.859000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:39:57.861000 audit[2397]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.861000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf248be40 a2=0 a3=7ffdf248be2c items=0 ppid=2242 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:39:57.864000 audit[2400]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.864000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc5bc251d0 a2=0 a3=7ffc5bc251bc items=0 ppid=2242 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:39:57.865000 audit[2401]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.865000 audit[2401]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffe49aab430 a2=0 a3=7ffe49aab41c items=0 ppid=2242 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:39:57.866000 audit[2403]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.866000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1b7f8240 a2=0 a3=7fff1b7f822c items=0 ppid=2242 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:39:57.868000 audit[2404]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.868000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe697fd6b0 a2=0 a3=7ffe697fd69c items=0 ppid=2242 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:39:57.870000 audit[2406]: NETFILTER_CFG table=filter:76 
family=10 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.870000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc108d5ab0 a2=0 a3=7ffc108d5a9c items=0 ppid=2242 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:39:57.873000 audit[2409]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.873000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa3b4aa70 a2=0 a3=7fffa3b4aa5c items=0 ppid=2242 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:39:57.876000 audit[2412]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.876000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4d873250 a2=0 a3=7fff4d87323c items=0 ppid=2242 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:39:57.877000 audit[2413]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.877000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9c13fc20 a2=0 a3=7ffd9c13fc0c items=0 ppid=2242 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:39:57.879000 audit[2415]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.879000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffec7ed1860 a2=0 a3=7ffec7ed184c items=0 ppid=2242 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:39:57.882000 audit[2418]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2418 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.882000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd21d5cf00 a2=0 a3=7ffd21d5ceec items=0 ppid=2242 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:39:57.883000 audit[2419]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.883000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4d3a0b30 a2=0 a3=7ffe4d3a0b1c items=0 ppid=2242 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:39:57.885000 audit[2421]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.885000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd9071bf30 a2=0 a3=7ffd9071bf1c items=0 ppid=2242 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.885000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:39:57.886000 audit[2422]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.886000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7a45b160 a2=0 a3=7ffe7a45b14c items=0 ppid=2242 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:39:57.888000 audit[2424]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.888000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf999a7c0 a2=0 a3=7ffcf999a7ac items=0 ppid=2242 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:39:57.891000 audit[2427]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:39:57.891000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc704ca660 a2=0 a3=7ffc704ca64c items=0 ppid=2242 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:39:57.894000 audit[2429]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:39:57.894000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd4455e380 a2=0 a3=7ffd4455e36c items=0 ppid=2242 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.894000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:39:57.894000 audit[2429]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:39:57.894000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd4455e380 a2=0 a3=7ffd4455e36c items=0 ppid=2242 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:57.894000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:39:57.927978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount498203986.mount: Deactivated successfully. 
May 17 00:39:58.506899 kubelet[2118]: E0517 00:39:58.506871 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:58.514344 kubelet[2118]: I0517 00:39:58.514295 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hntkh" podStartSLOduration=2.51427586 podStartE2EDuration="2.51427586s" podCreationTimestamp="2025-05-17 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:58.513778704 +0000 UTC m=+8.112226506" watchObservedRunningTime="2025-05-17 00:39:58.51427586 +0000 UTC m=+8.112723672" May 17 00:39:59.908912 kubelet[2118]: E0517 00:39:59.908881 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:00.509790 kubelet[2118]: E0517 00:40:00.509725 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:01.043282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176493605.mount: Deactivated successfully. 
May 17 00:40:02.134099 env[1303]: time="2025-05-17T00:40:02.134038390Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:02.157367 env[1303]: time="2025-05-17T00:40:02.157334725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:02.164075 env[1303]: time="2025-05-17T00:40:02.164038428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:02.171560 env[1303]: time="2025-05-17T00:40:02.171514541Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:02.172025 env[1303]: time="2025-05-17T00:40:02.171992980Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:40:02.173786 env[1303]: time="2025-05-17T00:40:02.173755825Z" level=info msg="CreateContainer within sandbox \"2052e7ba80485089532c2d2b4000dd70de476b5f6e7fb18975ce03216251ec0d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:40:02.205208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684954299.mount: Deactivated successfully. 
May 17 00:40:02.245500 env[1303]: time="2025-05-17T00:40:02.245424308Z" level=info msg="CreateContainer within sandbox \"2052e7ba80485089532c2d2b4000dd70de476b5f6e7fb18975ce03216251ec0d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd17f7eff0b7ddb201fcb58f2a47c7fbca91d3c673d6d7fe12aaceab27aecee5\"" May 17 00:40:02.246159 env[1303]: time="2025-05-17T00:40:02.246062176Z" level=info msg="StartContainer for \"dd17f7eff0b7ddb201fcb58f2a47c7fbca91d3c673d6d7fe12aaceab27aecee5\"" May 17 00:40:02.950471 env[1303]: time="2025-05-17T00:40:02.950411001Z" level=info msg="StartContainer for \"dd17f7eff0b7ddb201fcb58f2a47c7fbca91d3c673d6d7fe12aaceab27aecee5\" returns successfully" May 17 00:40:03.504881 kubelet[2118]: E0517 00:40:03.504837 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:03.995047 kubelet[2118]: E0517 00:40:03.995000 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:04.016246 kubelet[2118]: I0517 00:40:04.016186 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-ljbgn" podStartSLOduration=2.686522665 podStartE2EDuration="7.016167802s" podCreationTimestamp="2025-05-17 00:39:57 +0000 UTC" firstStartedPulling="2025-05-17 00:39:57.842998227 +0000 UTC m=+7.441446029" lastFinishedPulling="2025-05-17 00:40:02.172643364 +0000 UTC m=+11.771091166" observedRunningTime="2025-05-17 00:40:03.973910268 +0000 UTC m=+13.572358080" watchObservedRunningTime="2025-05-17 00:40:04.016167802 +0000 UTC m=+13.614615604" May 17 00:40:08.156282 sudo[1471]: pam_unix(sudo:session): session closed for user root May 17 00:40:08.156000 audit[1471]: USER_END pid=1471 uid=500 auid=500 ses=7 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:40:08.157858 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 00:40:08.157919 kernel: audit: type=1106 audit(1747442408.156:276): pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:40:08.156000 audit[1471]: CRED_DISP pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:40:08.167379 kernel: audit: type=1104 audit(1747442408.156:277): pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:40:08.176251 sshd[1465]: pam_unix(sshd:session): session closed for user core May 17 00:40:08.179000 audit[1465]: USER_END pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:08.181313 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. May 17 00:40:08.182955 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:44504.service: Deactivated successfully. May 17 00:40:08.183737 systemd[1]: session-7.scope: Deactivated successfully. 
May 17 00:40:08.185236 kernel: audit: type=1106 audit(1747442408.179:278): pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:08.185293 kernel: audit: type=1104 audit(1747442408.179:279): pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:08.179000 audit[1465]: CRED_DISP pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:08.185654 systemd-logind[1293]: Removed session 7. May 17 00:40:08.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:44504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:08.199155 kernel: audit: type=1131 audit(1747442408.179:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.136:22-10.0.0.1:44504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:09.331000 audit[2521]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.331000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed643ccc0 a2=0 a3=7ffed643ccac items=0 ppid=2242 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.340281 kernel: audit: type=1325 audit(1747442409.331:281): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.340346 kernel: audit: type=1300 audit(1747442409.331:281): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed643ccc0 a2=0 a3=7ffed643ccac items=0 ppid=2242 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:09.343040 kernel: audit: type=1327 audit(1747442409.331:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:09.344000 audit[2521]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.344000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed643ccc0 a2=0 a3=0 items=0 ppid=2242 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:40:09.355537 kernel: audit: type=1325 audit(1747442409.344:282): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.355585 kernel: audit: type=1300 audit(1747442409.344:282): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed643ccc0 a2=0 a3=0 items=0 ppid=2242 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:09.420000 audit[2523]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.420000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe7ed48c00 a2=0 a3=7ffe7ed48bec items=0 ppid=2242 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:09.424000 audit[2523]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:09.424000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7ed48c00 a2=0 a3=0 items=0 ppid=2242 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.424000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:11.652000 audit[2526]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:11.652000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdbab4d750 a2=0 a3=7ffdbab4d73c items=0 ppid=2242 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:11.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:11.657000 audit[2526]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:11.657000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdbab4d750 a2=0 a3=0 items=0 ppid=2242 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:11.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:11.670000 audit[2528]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:11.670000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd4d7f3b70 a2=0 a3=7ffd4d7f3b5c items=0 ppid=2242 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:11.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:11.675000 audit[2528]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:11.675000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4d7f3b70 a2=0 a3=0 items=0 ppid=2242 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:11.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:12.317642 kubelet[2118]: I0517 00:40:12.317586 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77852e75-0911-4802-9407-212e127d46b1-tigera-ca-bundle\") pod \"calico-typha-785744cb6b-tdvng\" (UID: \"77852e75-0911-4802-9407-212e127d46b1\") " pod="calico-system/calico-typha-785744cb6b-tdvng" May 17 00:40:12.317642 kubelet[2118]: I0517 00:40:12.317634 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/77852e75-0911-4802-9407-212e127d46b1-typha-certs\") pod \"calico-typha-785744cb6b-tdvng\" (UID: \"77852e75-0911-4802-9407-212e127d46b1\") " pod="calico-system/calico-typha-785744cb6b-tdvng" May 17 00:40:12.317642 kubelet[2118]: I0517 00:40:12.317652 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjsb\" (UniqueName: \"kubernetes.io/projected/77852e75-0911-4802-9407-212e127d46b1-kube-api-access-htjsb\") pod 
\"calico-typha-785744cb6b-tdvng\" (UID: \"77852e75-0911-4802-9407-212e127d46b1\") " pod="calico-system/calico-typha-785744cb6b-tdvng" May 17 00:40:12.519015 kubelet[2118]: I0517 00:40:12.518966 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-flexvol-driver-host\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519015 kubelet[2118]: I0517 00:40:12.519008 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-lib-modules\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519015 kubelet[2118]: I0517 00:40:12.519026 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a867dc72-badd-47bb-bb08-4ef26005812c-node-certs\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519286 kubelet[2118]: I0517 00:40:12.519039 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-var-run-calico\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519286 kubelet[2118]: I0517 00:40:12.519100 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-cni-bin-dir\") pod \"calico-node-gv478\" (UID: 
\"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519286 kubelet[2118]: I0517 00:40:12.519144 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a867dc72-badd-47bb-bb08-4ef26005812c-tigera-ca-bundle\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519286 kubelet[2118]: I0517 00:40:12.519172 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-cni-log-dir\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519286 kubelet[2118]: I0517 00:40:12.519190 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsz4\" (UniqueName: \"kubernetes.io/projected/a867dc72-badd-47bb-bb08-4ef26005812c-kube-api-access-pgsz4\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519479 kubelet[2118]: I0517 00:40:12.519205 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-xtables-lock\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519479 kubelet[2118]: I0517 00:40:12.519219 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-cni-net-dir\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " 
pod="calico-system/calico-node-gv478" May 17 00:40:12.519479 kubelet[2118]: I0517 00:40:12.519234 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-policysync\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.519479 kubelet[2118]: I0517 00:40:12.519248 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a867dc72-badd-47bb-bb08-4ef26005812c-var-lib-calico\") pod \"calico-node-gv478\" (UID: \"a867dc72-badd-47bb-bb08-4ef26005812c\") " pod="calico-system/calico-node-gv478" May 17 00:40:12.585243 kubelet[2118]: E0517 00:40:12.585132 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:12.585951 env[1303]: time="2025-05-17T00:40:12.585898594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785744cb6b-tdvng,Uid:77852e75-0911-4802-9407-212e127d46b1,Namespace:calico-system,Attempt:0,}" May 17 00:40:12.621538 kubelet[2118]: E0517 00:40:12.621490 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.621538 kubelet[2118]: W0517 00:40:12.621516 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.621538 kubelet[2118]: E0517 00:40:12.621546 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.623272 kubelet[2118]: E0517 00:40:12.623231 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.623272 kubelet[2118]: W0517 00:40:12.623263 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.623272 kubelet[2118]: E0517 00:40:12.623282 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.633820 kubelet[2118]: E0517 00:40:12.633772 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484" May 17 00:40:12.685000 audit[2541]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:12.685000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc82ee4790 a2=0 a3=7ffc82ee477c items=0 ppid=2242 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:12.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:12.692000 audit[2541]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:12.692000 audit[2541]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc82ee4790 a2=0 a3=0 items=0 ppid=2242 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:12.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:12.712463 kubelet[2118]: E0517 00:40:12.712427 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.712463 kubelet[2118]: W0517 00:40:12.712455 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.712605 kubelet[2118]: E0517 00:40:12.712479 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.712817 kubelet[2118]: E0517 00:40:12.712786 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.712817 kubelet[2118]: W0517 00:40:12.712808 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713035 kubelet[2118]: E0517 00:40:12.712831 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.713035 kubelet[2118]: E0517 00:40:12.713021 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713035 kubelet[2118]: W0517 00:40:12.713028 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713157 kubelet[2118]: E0517 00:40:12.713038 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.713207 kubelet[2118]: E0517 00:40:12.713175 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713207 kubelet[2118]: W0517 00:40:12.713182 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713207 kubelet[2118]: E0517 00:40:12.713189 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.713331 kubelet[2118]: E0517 00:40:12.713308 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713331 kubelet[2118]: W0517 00:40:12.713330 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713416 kubelet[2118]: E0517 00:40:12.713338 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.713474 kubelet[2118]: E0517 00:40:12.713459 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713474 kubelet[2118]: W0517 00:40:12.713470 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713544 kubelet[2118]: E0517 00:40:12.713479 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.713615 kubelet[2118]: E0517 00:40:12.713590 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713615 kubelet[2118]: W0517 00:40:12.713610 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713683 kubelet[2118]: E0517 00:40:12.713617 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.713742 kubelet[2118]: E0517 00:40:12.713723 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713742 kubelet[2118]: W0517 00:40:12.713735 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713742 kubelet[2118]: E0517 00:40:12.713741 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.713956 kubelet[2118]: E0517 00:40:12.713882 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.713956 kubelet[2118]: W0517 00:40:12.713890 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.713956 kubelet[2118]: E0517 00:40:12.713898 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.714054 kubelet[2118]: E0517 00:40:12.713994 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714054 kubelet[2118]: W0517 00:40:12.714001 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714054 kubelet[2118]: E0517 00:40:12.714007 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.714172 kubelet[2118]: E0517 00:40:12.714117 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714172 kubelet[2118]: W0517 00:40:12.714123 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714172 kubelet[2118]: E0517 00:40:12.714130 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.714267 kubelet[2118]: E0517 00:40:12.714229 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714267 kubelet[2118]: W0517 00:40:12.714236 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714267 kubelet[2118]: E0517 00:40:12.714242 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.714382 kubelet[2118]: E0517 00:40:12.714369 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714382 kubelet[2118]: W0517 00:40:12.714378 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714382 kubelet[2118]: E0517 00:40:12.714384 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.714538 kubelet[2118]: E0517 00:40:12.714511 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714538 kubelet[2118]: W0517 00:40:12.714528 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714538 kubelet[2118]: E0517 00:40:12.714534 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.714672 kubelet[2118]: E0517 00:40:12.714634 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714672 kubelet[2118]: W0517 00:40:12.714640 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714672 kubelet[2118]: E0517 00:40:12.714646 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.714763 kubelet[2118]: E0517 00:40:12.714744 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714763 kubelet[2118]: W0517 00:40:12.714751 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714763 kubelet[2118]: E0517 00:40:12.714757 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.714910 kubelet[2118]: E0517 00:40:12.714890 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.714910 kubelet[2118]: W0517 00:40:12.714902 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.714910 kubelet[2118]: E0517 00:40:12.714911 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.715065 kubelet[2118]: E0517 00:40:12.715032 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.715065 kubelet[2118]: W0517 00:40:12.715039 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.715065 kubelet[2118]: E0517 00:40:12.715047 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.715177 kubelet[2118]: E0517 00:40:12.715159 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.715177 kubelet[2118]: W0517 00:40:12.715166 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.715177 kubelet[2118]: E0517 00:40:12.715172 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.715299 kubelet[2118]: E0517 00:40:12.715270 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.715299 kubelet[2118]: W0517 00:40:12.715286 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.715299 kubelet[2118]: E0517 00:40:12.715293 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.716558 kubelet[2118]: E0517 00:40:12.716520 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.716558 kubelet[2118]: W0517 00:40:12.716535 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.716558 kubelet[2118]: E0517 00:40:12.716545 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.720809 kubelet[2118]: E0517 00:40:12.720786 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.720809 kubelet[2118]: W0517 00:40:12.720800 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.720809 kubelet[2118]: E0517 00:40:12.720813 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.720996 kubelet[2118]: I0517 00:40:12.720844 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/df42f368-756b-4bd3-8365-0200df6a0484-socket-dir\") pod \"csi-node-driver-t72dz\" (UID: \"df42f368-756b-4bd3-8365-0200df6a0484\") " pod="calico-system/csi-node-driver-t72dz" May 17 00:40:12.721088 kubelet[2118]: E0517 00:40:12.721072 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.721088 kubelet[2118]: W0517 00:40:12.721083 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.721174 kubelet[2118]: E0517 00:40:12.721095 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.721174 kubelet[2118]: I0517 00:40:12.721119 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/df42f368-756b-4bd3-8365-0200df6a0484-varrun\") pod \"csi-node-driver-t72dz\" (UID: \"df42f368-756b-4bd3-8365-0200df6a0484\") " pod="calico-system/csi-node-driver-t72dz" May 17 00:40:12.721304 kubelet[2118]: E0517 00:40:12.721284 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.721304 kubelet[2118]: W0517 00:40:12.721299 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.721391 kubelet[2118]: E0517 00:40:12.721313 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.721473 kubelet[2118]: E0517 00:40:12.721459 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.721473 kubelet[2118]: W0517 00:40:12.721468 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.721536 kubelet[2118]: E0517 00:40:12.721480 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.721720 kubelet[2118]: E0517 00:40:12.721690 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.721720 kubelet[2118]: W0517 00:40:12.721711 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.721800 kubelet[2118]: E0517 00:40:12.721734 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.721800 kubelet[2118]: I0517 00:40:12.721773 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvtx\" (UniqueName: \"kubernetes.io/projected/df42f368-756b-4bd3-8365-0200df6a0484-kube-api-access-xsvtx\") pod \"csi-node-driver-t72dz\" (UID: \"df42f368-756b-4bd3-8365-0200df6a0484\") " pod="calico-system/csi-node-driver-t72dz" May 17 00:40:12.722008 kubelet[2118]: E0517 00:40:12.721980 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722008 kubelet[2118]: W0517 00:40:12.721997 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722079 kubelet[2118]: E0517 00:40:12.722011 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.722167 kubelet[2118]: E0517 00:40:12.722154 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722167 kubelet[2118]: W0517 00:40:12.722164 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722228 kubelet[2118]: E0517 00:40:12.722174 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.722306 kubelet[2118]: E0517 00:40:12.722293 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722306 kubelet[2118]: W0517 00:40:12.722302 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722373 kubelet[2118]: E0517 00:40:12.722313 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.722373 kubelet[2118]: I0517 00:40:12.722329 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df42f368-756b-4bd3-8365-0200df6a0484-registration-dir\") pod \"csi-node-driver-t72dz\" (UID: \"df42f368-756b-4bd3-8365-0200df6a0484\") " pod="calico-system/csi-node-driver-t72dz" May 17 00:40:12.722512 kubelet[2118]: E0517 00:40:12.722498 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722512 kubelet[2118]: W0517 00:40:12.722510 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722588 kubelet[2118]: E0517 00:40:12.722520 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.722588 kubelet[2118]: I0517 00:40:12.722533 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df42f368-756b-4bd3-8365-0200df6a0484-kubelet-dir\") pod \"csi-node-driver-t72dz\" (UID: \"df42f368-756b-4bd3-8365-0200df6a0484\") " pod="calico-system/csi-node-driver-t72dz" May 17 00:40:12.722915 kubelet[2118]: E0517 00:40:12.722684 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722915 kubelet[2118]: W0517 00:40:12.722696 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722915 kubelet[2118]: E0517 00:40:12.722708 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.722915 kubelet[2118]: E0517 00:40:12.722865 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.722915 kubelet[2118]: W0517 00:40:12.722872 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.722915 kubelet[2118]: E0517 00:40:12.722883 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.723227 kubelet[2118]: E0517 00:40:12.723072 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.723227 kubelet[2118]: W0517 00:40:12.723080 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.723227 kubelet[2118]: E0517 00:40:12.723092 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.723328 kubelet[2118]: E0517 00:40:12.723247 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.723328 kubelet[2118]: W0517 00:40:12.723255 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.723328 kubelet[2118]: E0517 00:40:12.723266 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.723445 kubelet[2118]: E0517 00:40:12.723431 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.723445 kubelet[2118]: W0517 00:40:12.723441 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.723559 kubelet[2118]: E0517 00:40:12.723448 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.723601 kubelet[2118]: E0517 00:40:12.723591 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.723601 kubelet[2118]: W0517 00:40:12.723599 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.723651 kubelet[2118]: E0517 00:40:12.723605 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.746805 env[1303]: time="2025-05-17T00:40:12.746712426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:12.746805 env[1303]: time="2025-05-17T00:40:12.746777170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:12.746805 env[1303]: time="2025-05-17T00:40:12.746794513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:12.747299 env[1303]: time="2025-05-17T00:40:12.747220260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24c7a513bbcf8258a0f5373f05ba0f0df8826f7979bcdea0722ee2f9a13be7ac pid=2587 runtime=io.containerd.runc.v2 May 17 00:40:12.783587 env[1303]: time="2025-05-17T00:40:12.783525922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv478,Uid:a867dc72-badd-47bb-bb08-4ef26005812c,Namespace:calico-system,Attempt:0,}" May 17 00:40:12.800304 env[1303]: time="2025-05-17T00:40:12.800252395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785744cb6b-tdvng,Uid:77852e75-0911-4802-9407-212e127d46b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"24c7a513bbcf8258a0f5373f05ba0f0df8826f7979bcdea0722ee2f9a13be7ac\"" May 17 00:40:12.801136 kubelet[2118]: E0517 00:40:12.800861 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:12.802304 env[1303]: time="2025-05-17T00:40:12.801821588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:40:12.823729 kubelet[2118]: E0517 00:40:12.823691 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.823729 kubelet[2118]: W0517 00:40:12.823719 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.823729 kubelet[2118]: E0517 00:40:12.823739 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.823962 kubelet[2118]: E0517 00:40:12.823948 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.823962 kubelet[2118]: W0517 00:40:12.823960 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.824011 kubelet[2118]: E0517 00:40:12.823973 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.824235 kubelet[2118]: E0517 00:40:12.824213 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.824235 kubelet[2118]: W0517 00:40:12.824234 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.824326 kubelet[2118]: E0517 00:40:12.824253 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.824535 kubelet[2118]: E0517 00:40:12.824514 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.824535 kubelet[2118]: W0517 00:40:12.824529 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.824535 kubelet[2118]: E0517 00:40:12.824537 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.824734 kubelet[2118]: E0517 00:40:12.824718 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.824734 kubelet[2118]: W0517 00:40:12.824732 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.824819 kubelet[2118]: E0517 00:40:12.824748 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:12.824980 kubelet[2118]: E0517 00:40:12.824961 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.824980 kubelet[2118]: W0517 00:40:12.824975 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.825081 kubelet[2118]: E0517 00:40:12.824991 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:12.825458 kubelet[2118]: E0517 00:40:12.825242 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:12.825458 kubelet[2118]: W0517 00:40:12.825262 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:12.825458 kubelet[2118]: E0517 00:40:12.825302 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:40:12.825605 kubelet[2118]: E0517 00:40:12.825584 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:12.825605 kubelet[2118]: W0517 00:40:12.825596 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:12.825763 kubelet[2118]: E0517 00:40:12.825631 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:40:12.843484 kubelet[2118]: E0517 00:40:12.843386 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:12.843484 kubelet[2118]: W0517 00:40:12.843408 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:12.843484 kubelet[2118]: E0517 00:40:12.843430 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:40:12.849687 kubelet[2118]: E0517 00:40:12.849658 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:12.849687 kubelet[2118]: W0517 00:40:12.849678 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:12.849687 kubelet[2118]: E0517 00:40:12.849696 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:40:12.921657 env[1303]: time="2025-05-17T00:40:12.921572203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:40:12.921657 env[1303]: time="2025-05-17T00:40:12.921627599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:40:12.921657 env[1303]: time="2025-05-17T00:40:12.921643610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:40:12.921912 env[1303]: time="2025-05-17T00:40:12.921869053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b pid=2658 runtime=io.containerd.runc.v2
May 17 00:40:12.954664 env[1303]: time="2025-05-17T00:40:12.954552510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv478,Uid:a867dc72-badd-47bb-bb08-4ef26005812c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\""
May 17 00:40:14.485313 kubelet[2118]: E0517 00:40:14.485243 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:15.370954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843537242.mount: Deactivated successfully.
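[Editor's note on the repeated FlexVolume errors in this log: the kubelet probes the plugin directory nodeagent~uds, fails to execute the missing driver binary, and then attempts to parse the (empty) driver output as JSON, which yields exactly the "unexpected end of JSON input" seen above. The sketch below is illustrative only, not kubelet's actual code; the `DriverStatus` fields follow the FlexVolume convention of a JSON status object printed on stdout.]

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus approximates the JSON object a FlexVolume driver prints on
// stdout for the "init" call, e.g. {"status":"Success","capabilities":{"attach":false}}.
// Illustrative sketch; field set is an assumption, not kubelet's source.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func parseInitOutput(output []byte) (*DriverStatus, error) {
	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Empty output (the driver binary was never found, so nothing was
		// printed) fails here with "unexpected end of JSON input" -- the
		// error repeated throughout the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v", output, err)
	}
	return &st, nil
}

func main() {
	// Missing executable => empty stdout => unmarshal error.
	if _, err := parseInitOutput([]byte("")); err != nil {
		fmt.Println(err)
	}
	// A well-formed driver response parses cleanly.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	st, err := parseInitOutput(ok)
	if err == nil {
		fmt.Println(st.Status)
	}
}
```

Installing a real driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds (or removing the stale plugin directory) stops both the $PATH error and the JSON error, since the JSON failure is only a downstream symptom of the missing executable.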
May 17 00:40:16.485818 kubelet[2118]: E0517 00:40:16.485763 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:17.257015 env[1303]: time="2025-05-17T00:40:17.256951288Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:17.349139 env[1303]: time="2025-05-17T00:40:17.349055803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:17.410841 env[1303]: time="2025-05-17T00:40:17.410773143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:17.452045 env[1303]: time="2025-05-17T00:40:17.451967309Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:17.452645 env[1303]: time="2025-05-17T00:40:17.452598058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:40:17.453872 env[1303]: time="2025-05-17T00:40:17.453809919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:40:17.470598 env[1303]: time="2025-05-17T00:40:17.470528269Z" level=info msg="CreateContainer within sandbox 
\"24c7a513bbcf8258a0f5373f05ba0f0df8826f7979bcdea0722ee2f9a13be7ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:40:18.445972 env[1303]: time="2025-05-17T00:40:18.445886717Z" level=info msg="CreateContainer within sandbox \"24c7a513bbcf8258a0f5373f05ba0f0df8826f7979bcdea0722ee2f9a13be7ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8694af3f74f681b3002080abdee3eed375ed5c9eddfc4359990e7322fdb39613\""
May 17 00:40:18.484646 kubelet[2118]: E0517 00:40:18.484601 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:18.514075 env[1303]: time="2025-05-17T00:40:18.514027329Z" level=info msg="StartContainer for \"8694af3f74f681b3002080abdee3eed375ed5c9eddfc4359990e7322fdb39613\""
May 17 00:40:19.236456 env[1303]: time="2025-05-17T00:40:19.236403390Z" level=info msg="StartContainer for \"8694af3f74f681b3002080abdee3eed375ed5c9eddfc4359990e7322fdb39613\" returns successfully"
May 17 00:40:20.243447 kubelet[2118]: E0517 00:40:20.243410 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:40:20.267262 kubelet[2118]: E0517 00:40:20.267203 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:20.267262 kubelet[2118]: W0517 00:40:20.267242 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:20.267453 kubelet[2118]: E0517 00:40:20.267272 2118 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:40:20.277996 kubelet[2118]: E0517 00:40:20.277975 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:20.277996 kubelet[2118]: W0517 00:40:20.277990 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:20.278087 kubelet[2118]: E0517 00:40:20.278033 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:20.278281 kubelet[2118]: E0517 00:40:20.278259 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:20.278281 kubelet[2118]: W0517 00:40:20.278274 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:20.278372 kubelet[2118]: E0517 00:40:20.278291 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:20.278505 kubelet[2118]: E0517 00:40:20.278488 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:20.278505 kubelet[2118]: W0517 00:40:20.278502 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:20.278556 kubelet[2118]: E0517 00:40:20.278517 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:40:20.278780 kubelet[2118]: E0517 00:40:20.278759 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:20.278780 kubelet[2118]: W0517 00:40:20.278771 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:20.278836 kubelet[2118]: E0517 00:40:20.278784 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:40:20.278977 kubelet[2118]: E0517 00:40:20.278962 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:40:20.278977 kubelet[2118]: W0517 00:40:20.278974 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:40:20.279038 kubelet[2118]: E0517 00:40:20.278983 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:40:20.298218 kubelet[2118]: I0517 00:40:20.298132 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-785744cb6b-tdvng" podStartSLOduration=3.646100292 podStartE2EDuration="8.298093025s" podCreationTimestamp="2025-05-17 00:40:12 +0000 UTC" firstStartedPulling="2025-05-17 00:40:12.801568802 +0000 UTC m=+22.400016594" lastFinishedPulling="2025-05-17 00:40:17.453561515 +0000 UTC m=+27.052009327" observedRunningTime="2025-05-17 00:40:20.29781777 +0000 UTC m=+29.896265572" watchObservedRunningTime="2025-05-17 00:40:20.298093025 +0000 UTC m=+29.896540827"
May 17 00:40:20.485346 kubelet[2118]: E0517 00:40:20.485290 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:20.927404 env[1303]: time="2025-05-17T00:40:20.927350106Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:20.977164 env[1303]: time="2025-05-17T00:40:20.977115406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:21.010485 env[1303]: time="2025-05-17T00:40:21.010430889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:21.028096 env[1303]: time="2025-05-17T00:40:21.028036353Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:21.028673 env[1303]: time="2025-05-17T00:40:21.028646828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 17 00:40:21.031582 env[1303]: time="2025-05-17T00:40:21.031542058Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 17 00:40:21.240752 kubelet[2118]: I0517 00:40:21.240657 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:40:21.241001 kubelet[2118]: E0517 00:40:21.240975 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:40:21.276730 kubelet[2118]: E0517 00:40:21.276692 2118 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:40:21.276730 kubelet[2118]: W0517 00:40:21.276711 2118 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:40:21.276730 kubelet[2118]: E0517 00:40:21.276727 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:40:21.287213 kubelet[2118]: E0517 00:40:21.287156 2118 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:40:21.363910 env[1303]: time="2025-05-17T00:40:21.363830582Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2\""
May 17 00:40:21.364455 env[1303]: time="2025-05-17T00:40:21.364407743Z" level=info msg="StartContainer for \"f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2\""
May 17 00:40:21.564243 env[1303]: time="2025-05-17T00:40:21.564029959Z" level=info msg="StartContainer for \"f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2\" returns successfully"
May 17 00:40:21.706156 env[1303]: time="2025-05-17T00:40:21.706077167Z" level=info msg="shim disconnected" id=f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2
May 17 00:40:21.706156 env[1303]: time="2025-05-17T00:40:21.706160007Z" level=warning msg="cleaning up after shim disconnected" id=f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2 namespace=k8s.io
May 17 00:40:21.706400 env[1303]: time="2025-05-17T00:40:21.706171320Z" level=info msg="cleaning up dead shim"
May 17 00:40:21.712667 env[1303]: time="2025-05-17T00:40:21.712613173Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2849 runtime=io.containerd.runc.v2\n"
May 17 00:40:22.244387 env[1303]: time="2025-05-17T00:40:22.244332395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 17 00:40:22.268346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97866e45419e3e409a320ff4b6ab47481b54e08d4091ece980c6acc73fd51e2-rootfs.mount: Deactivated successfully.
May 17 00:40:22.486422 kubelet[2118]: E0517 00:40:22.486327 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:24.485221 kubelet[2118]: E0517 00:40:24.485162 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:26.485188 kubelet[2118]: E0517 00:40:26.485137 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484"
May 17 00:40:27.650019 kubelet[2118]: I0517 00:40:27.649978 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:40:27.650644 kubelet[2118]: E0517 00:40:27.650274 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:40:27.681807 kernel: kauditd_printk_skb: 25 callbacks suppressed
May 17 00:40:27.681928 kernel: audit: type=1325 audit(1747442427.677:291): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:27.677000 audit[2872]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:27.689675 kernel: audit: type=1300 audit(1747442427.677:291): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb0449a80 a2=0 a3=7ffcb0449a6c items=0 ppid=2242 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:27.689737 kernel: audit: type=1327 audit(1747442427.677:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:27.677000 audit[2872]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb0449a80 a2=0 a3=7ffcb0449a6c items=0 ppid=2242 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:27.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:27.689000 audit[2872]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:27.701123 kernel: audit: type=1325 audit(1747442427.689:292): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:27.701178 kernel: audit: type=1300 audit(1747442427.689:292): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb0449a80 a2=0 a3=7ffcb0449a6c items=0 ppid=2242 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:27.701219 kernel: audit: type=1327 audit(1747442427.689:292): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:27.689000 audit[2872]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb0449a80 a2=0 a3=7ffcb0449a6c items=0 ppid=2242 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:27.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:28.197584 env[1303]: time="2025-05-17T00:40:28.197506257Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:28.199725 env[1303]: time="2025-05-17T00:40:28.199663450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:28.201320 env[1303]: time="2025-05-17T00:40:28.201261154Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:28.202791 env[1303]: time="2025-05-17T00:40:28.202734858Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:28.203154 env[1303]: time="2025-05-17T00:40:28.203127475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:40:28.205607 env[1303]: 
time="2025-05-17T00:40:28.205570771Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:40:28.219348 env[1303]: time="2025-05-17T00:40:28.219292262Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62\"" May 17 00:40:28.219789 env[1303]: time="2025-05-17T00:40:28.219755496Z" level=info msg="StartContainer for \"b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62\"" May 17 00:40:28.255307 kubelet[2118]: E0517 00:40:28.255232 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:28.269803 env[1303]: time="2025-05-17T00:40:28.268952059Z" level=info msg="StartContainer for \"b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62\" returns successfully" May 17 00:40:28.484986 kubelet[2118]: E0517 00:40:28.484598 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484" May 17 00:40:30.485369 kubelet[2118]: E0517 00:40:30.485312 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484" May 17 00:40:30.617675 env[1303]: time="2025-05-17T00:40:30.617615109Z" 
level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:40:30.635601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62-rootfs.mount: Deactivated successfully. May 17 00:40:30.642970 env[1303]: time="2025-05-17T00:40:30.642917750Z" level=info msg="shim disconnected" id=b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62 May 17 00:40:30.643134 env[1303]: time="2025-05-17T00:40:30.642980400Z" level=warning msg="cleaning up after shim disconnected" id=b4beaa61edb1cda97b34d1a9d3b10bff5eeb3e1e3ccfe1deeb39361dffcb7e62 namespace=k8s.io May 17 00:40:30.643134 env[1303]: time="2025-05-17T00:40:30.642996642Z" level=info msg="cleaning up dead shim" May 17 00:40:30.649450 env[1303]: time="2025-05-17T00:40:30.649423576Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2923 runtime=io.containerd.runc.v2\n" May 17 00:40:30.673909 kubelet[2118]: I0517 00:40:30.673877 2118 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:40:30.764046 kubelet[2118]: I0517 00:40:30.763918 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7p6b\" (UniqueName: \"kubernetes.io/projected/1a553309-56ea-422b-ad2d-4882053f4a1c-kube-api-access-s7p6b\") pod \"coredns-7c65d6cfc9-qd4gz\" (UID: \"1a553309-56ea-422b-ad2d-4882053f4a1c\") " pod="kube-system/coredns-7c65d6cfc9-qd4gz" May 17 00:40:30.764046 kubelet[2118]: I0517 00:40:30.764020 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klt5v\" (UniqueName: 
\"kubernetes.io/projected/11364172-e58d-4414-85d0-e2ab3fb5d624-kube-api-access-klt5v\") pod \"calico-apiserver-5d67ffffc-lwqn5\" (UID: \"11364172-e58d-4414-85d0-e2ab3fb5d624\") " pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" May 17 00:40:30.764252 kubelet[2118]: I0517 00:40:30.764097 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxh5\" (UniqueName: \"kubernetes.io/projected/b53473ef-a278-4a28-a57c-3f8a53828a27-kube-api-access-wlxh5\") pod \"calico-kube-controllers-ddd9bb8f8-zxssx\" (UID: \"b53473ef-a278-4a28-a57c-3f8a53828a27\") " pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" May 17 00:40:30.764252 kubelet[2118]: I0517 00:40:30.764195 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls964\" (UniqueName: \"kubernetes.io/projected/272cb491-0dd7-4c17-9835-83fe9d59eb06-kube-api-access-ls964\") pod \"calico-apiserver-5d67ffffc-hkhhx\" (UID: \"272cb491-0dd7-4c17-9835-83fe9d59eb06\") " pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" May 17 00:40:30.764307 kubelet[2118]: I0517 00:40:30.764275 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a553309-56ea-422b-ad2d-4882053f4a1c-config-volume\") pod \"coredns-7c65d6cfc9-qd4gz\" (UID: \"1a553309-56ea-422b-ad2d-4882053f4a1c\") " pod="kube-system/coredns-7c65d6cfc9-qd4gz" May 17 00:40:30.764361 kubelet[2118]: I0517 00:40:30.764345 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b53473ef-a278-4a28-a57c-3f8a53828a27-tigera-ca-bundle\") pod \"calico-kube-controllers-ddd9bb8f8-zxssx\" (UID: \"b53473ef-a278-4a28-a57c-3f8a53828a27\") " pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" May 17 00:40:30.764434 kubelet[2118]: I0517 
00:40:30.764415 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/272cb491-0dd7-4c17-9835-83fe9d59eb06-calico-apiserver-certs\") pod \"calico-apiserver-5d67ffffc-hkhhx\" (UID: \"272cb491-0dd7-4c17-9835-83fe9d59eb06\") " pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" May 17 00:40:30.764515 kubelet[2118]: I0517 00:40:30.764485 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11364172-e58d-4414-85d0-e2ab3fb5d624-calico-apiserver-certs\") pod \"calico-apiserver-5d67ffffc-lwqn5\" (UID: \"11364172-e58d-4414-85d0-e2ab3fb5d624\") " pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" May 17 00:40:30.764596 kubelet[2118]: I0517 00:40:30.764535 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4c9eee5c-cc38-4032-8b07-e8d97094a990-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-6mszs\" (UID: \"4c9eee5c-cc38-4032-8b07-e8d97094a990\") " pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:30.764701 kubelet[2118]: I0517 00:40:30.764680 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrpmv\" (UniqueName: \"kubernetes.io/projected/e17ba7d8-0fd1-43ad-968d-26d53c711122-kube-api-access-qrpmv\") pod \"coredns-7c65d6cfc9-2z6nm\" (UID: \"e17ba7d8-0fd1-43ad-968d-26d53c711122\") " pod="kube-system/coredns-7c65d6cfc9-2z6nm" May 17 00:40:30.768424 kubelet[2118]: I0517 00:40:30.768369 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1418ed5-60dc-4d92-83de-664495988962-whisker-backend-key-pair\") pod \"whisker-67d7f797b5-wggl9\" (UID: 
\"c1418ed5-60dc-4d92-83de-664495988962\") " pod="calico-system/whisker-67d7f797b5-wggl9" May 17 00:40:30.768424 kubelet[2118]: I0517 00:40:30.768430 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58c33d02-5e7f-4996-a72a-2b2ef65a5742-calico-apiserver-certs\") pod \"calico-apiserver-7dfbcfbd6f-78x8j\" (UID: \"58c33d02-5e7f-4996-a72a-2b2ef65a5742\") " pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" May 17 00:40:30.768639 kubelet[2118]: I0517 00:40:30.768459 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c9eee5c-cc38-4032-8b07-e8d97094a990-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-6mszs\" (UID: \"4c9eee5c-cc38-4032-8b07-e8d97094a990\") " pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:30.768639 kubelet[2118]: I0517 00:40:30.768475 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb95b\" (UniqueName: \"kubernetes.io/projected/4c9eee5c-cc38-4032-8b07-e8d97094a990-kube-api-access-fb95b\") pod \"goldmane-8f77d7b6c-6mszs\" (UID: \"4c9eee5c-cc38-4032-8b07-e8d97094a990\") " pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:30.768639 kubelet[2118]: I0517 00:40:30.768492 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17ba7d8-0fd1-43ad-968d-26d53c711122-config-volume\") pod \"coredns-7c65d6cfc9-2z6nm\" (UID: \"e17ba7d8-0fd1-43ad-968d-26d53c711122\") " pod="kube-system/coredns-7c65d6cfc9-2z6nm" May 17 00:40:30.768639 kubelet[2118]: I0517 00:40:30.768523 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c1418ed5-60dc-4d92-83de-664495988962-whisker-ca-bundle\") pod \"whisker-67d7f797b5-wggl9\" (UID: \"c1418ed5-60dc-4d92-83de-664495988962\") " pod="calico-system/whisker-67d7f797b5-wggl9" May 17 00:40:30.768639 kubelet[2118]: I0517 00:40:30.768543 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9eee5c-cc38-4032-8b07-e8d97094a990-config\") pod \"goldmane-8f77d7b6c-6mszs\" (UID: \"4c9eee5c-cc38-4032-8b07-e8d97094a990\") " pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:30.768768 kubelet[2118]: I0517 00:40:30.768562 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7p5\" (UniqueName: \"kubernetes.io/projected/58c33d02-5e7f-4996-a72a-2b2ef65a5742-kube-api-access-bw7p5\") pod \"calico-apiserver-7dfbcfbd6f-78x8j\" (UID: \"58c33d02-5e7f-4996-a72a-2b2ef65a5742\") " pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" May 17 00:40:30.768768 kubelet[2118]: I0517 00:40:30.768582 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nms\" (UniqueName: \"kubernetes.io/projected/c1418ed5-60dc-4d92-83de-664495988962-kube-api-access-45nms\") pod \"whisker-67d7f797b5-wggl9\" (UID: \"c1418ed5-60dc-4d92-83de-664495988962\") " pod="calico-system/whisker-67d7f797b5-wggl9" May 17 00:40:31.002494 kubelet[2118]: E0517 00:40:31.002460 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:31.003074 env[1303]: time="2025-05-17T00:40:31.003021584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd4gz,Uid:1a553309-56ea-422b-ad2d-4882053f4a1c,Namespace:kube-system,Attempt:0,}" May 17 00:40:31.012157 kubelet[2118]: E0517 00:40:31.011921 2118 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:31.012285 env[1303]: time="2025-05-17T00:40:31.011939277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfbcfbd6f-78x8j,Uid:58c33d02-5e7f-4996-a72a-2b2ef65a5742,Namespace:calico-apiserver,Attempt:0,}" May 17 00:40:31.012618 env[1303]: time="2025-05-17T00:40:31.012537589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2z6nm,Uid:e17ba7d8-0fd1-43ad-968d-26d53c711122,Namespace:kube-system,Attempt:0,}" May 17 00:40:31.021416 env[1303]: time="2025-05-17T00:40:31.021305966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-lwqn5,Uid:11364172-e58d-4414-85d0-e2ab3fb5d624,Namespace:calico-apiserver,Attempt:0,}" May 17 00:40:31.022933 env[1303]: time="2025-05-17T00:40:31.022890207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd9bb8f8-zxssx,Uid:b53473ef-a278-4a28-a57c-3f8a53828a27,Namespace:calico-system,Attempt:0,}" May 17 00:40:31.025524 env[1303]: time="2025-05-17T00:40:31.025486126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-hkhhx,Uid:272cb491-0dd7-4c17-9835-83fe9d59eb06,Namespace:calico-apiserver,Attempt:0,}" May 17 00:40:31.025745 env[1303]: time="2025-05-17T00:40:31.025708644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-6mszs,Uid:4c9eee5c-cc38-4032-8b07-e8d97094a990,Namespace:calico-system,Attempt:0,}" May 17 00:40:31.028754 env[1303]: time="2025-05-17T00:40:31.028710054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d7f797b5-wggl9,Uid:c1418ed5-60dc-4d92-83de-664495988962,Namespace:calico-system,Attempt:0,}" May 17 00:40:31.087638 env[1303]: time="2025-05-17T00:40:31.087550658Z" level=error msg="Failed to destroy network for sandbox 
\"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.088050 env[1303]: time="2025-05-17T00:40:31.088011957Z" level=error msg="encountered an error cleaning up failed sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.088132 env[1303]: time="2025-05-17T00:40:31.088073295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd4gz,Uid:1a553309-56ea-422b-ad2d-4882053f4a1c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.088389 kubelet[2118]: E0517 00:40:31.088333 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.088587 kubelet[2118]: E0517 00:40:31.088422 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qd4gz" May 17 00:40:31.088587 kubelet[2118]: E0517 00:40:31.088450 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qd4gz" May 17 00:40:31.088587 kubelet[2118]: E0517 00:40:31.088515 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qd4gz_kube-system(1a553309-56ea-422b-ad2d-4882053f4a1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qd4gz_kube-system(1a553309-56ea-422b-ad2d-4882053f4a1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qd4gz" podUID="1a553309-56ea-422b-ad2d-4882053f4a1c" May 17 00:40:31.144574 env[1303]: time="2025-05-17T00:40:31.144502526Z" level=error msg="Failed to destroy network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.145154 env[1303]: time="2025-05-17T00:40:31.145120566Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.145366 env[1303]: time="2025-05-17T00:40:31.145300292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2z6nm,Uid:e17ba7d8-0fd1-43ad-968d-26d53c711122,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.145981 kubelet[2118]: E0517 00:40:31.145831 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.145981 kubelet[2118]: E0517 00:40:31.145934 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2z6nm" May 17 00:40:31.146554 kubelet[2118]: E0517 00:40:31.146177 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2z6nm" May 17 00:40:31.146554 kubelet[2118]: E0517 00:40:31.146246 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2z6nm_kube-system(e17ba7d8-0fd1-43ad-968d-26d53c711122)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2z6nm_kube-system(e17ba7d8-0fd1-43ad-968d-26d53c711122)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2z6nm" podUID="e17ba7d8-0fd1-43ad-968d-26d53c711122" May 17 00:40:31.189768 env[1303]: time="2025-05-17T00:40:31.189699715Z" level=error msg="Failed to destroy network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.190375 env[1303]: time="2025-05-17T00:40:31.190339207Z" level=error msg="encountered an error cleaning up failed sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.190530 env[1303]: time="2025-05-17T00:40:31.190483514Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd9bb8f8-zxssx,Uid:b53473ef-a278-4a28-a57c-3f8a53828a27,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.190916 kubelet[2118]: E0517 00:40:31.190873 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.190995 kubelet[2118]: E0517 00:40:31.190936 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" May 17 00:40:31.190995 kubelet[2118]: E0517 00:40:31.190955 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" May 17 00:40:31.191080 kubelet[2118]: E0517 00:40:31.190997 2118 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddd9bb8f8-zxssx_calico-system(b53473ef-a278-4a28-a57c-3f8a53828a27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ddd9bb8f8-zxssx_calico-system(b53473ef-a278-4a28-a57c-3f8a53828a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" podUID="b53473ef-a278-4a28-a57c-3f8a53828a27" May 17 00:40:31.199765 env[1303]: time="2025-05-17T00:40:31.199708340Z" level=error msg="Failed to destroy network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.200036 env[1303]: time="2025-05-17T00:40:31.200011072Z" level=error msg="encountered an error cleaning up failed sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.200098 env[1303]: time="2025-05-17T00:40:31.200051661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-lwqn5,Uid:11364172-e58d-4414-85d0-e2ab3fb5d624,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.200282 kubelet[2118]: E0517 00:40:31.200240 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.200351 kubelet[2118]: E0517 00:40:31.200294 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" May 17 00:40:31.200351 kubelet[2118]: E0517 00:40:31.200321 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" May 17 00:40:31.200415 kubelet[2118]: E0517 00:40:31.200357 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d67ffffc-lwqn5_calico-apiserver(11364172-e58d-4414-85d0-e2ab3fb5d624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5d67ffffc-lwqn5_calico-apiserver(11364172-e58d-4414-85d0-e2ab3fb5d624)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" podUID="11364172-e58d-4414-85d0-e2ab3fb5d624" May 17 00:40:31.209446 env[1303]: time="2025-05-17T00:40:31.209380466Z" level=error msg="Failed to destroy network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.209788 env[1303]: time="2025-05-17T00:40:31.209754416Z" level=error msg="encountered an error cleaning up failed sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.209847 env[1303]: time="2025-05-17T00:40:31.209811476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfbcfbd6f-78x8j,Uid:58c33d02-5e7f-4996-a72a-2b2ef65a5742,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.210072 kubelet[2118]: E0517 00:40:31.210040 2118 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.210162 kubelet[2118]: E0517 00:40:31.210099 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" May 17 00:40:31.210162 kubelet[2118]: E0517 00:40:31.210133 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" May 17 00:40:31.210246 kubelet[2118]: E0517 00:40:31.210171 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dfbcfbd6f-78x8j_calico-apiserver(58c33d02-5e7f-4996-a72a-2b2ef65a5742)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dfbcfbd6f-78x8j_calico-apiserver(58c33d02-5e7f-4996-a72a-2b2ef65a5742)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" podUID="58c33d02-5e7f-4996-a72a-2b2ef65a5742" May 17 00:40:31.219595 env[1303]: time="2025-05-17T00:40:31.219531534Z" level=error msg="Failed to destroy network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.220198 env[1303]: time="2025-05-17T00:40:31.220166277Z" level=error msg="encountered an error cleaning up failed sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.220337 env[1303]: time="2025-05-17T00:40:31.220300155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-hkhhx,Uid:272cb491-0dd7-4c17-9835-83fe9d59eb06,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.220799 kubelet[2118]: E0517 00:40:31.220741 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 17 00:40:31.220903 kubelet[2118]: E0517 00:40:31.220796 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" May 17 00:40:31.220903 kubelet[2118]: E0517 00:40:31.220819 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" May 17 00:40:31.220903 kubelet[2118]: E0517 00:40:31.220861 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d67ffffc-hkhhx_calico-apiserver(272cb491-0dd7-4c17-9835-83fe9d59eb06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d67ffffc-hkhhx_calico-apiserver(272cb491-0dd7-4c17-9835-83fe9d59eb06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" podUID="272cb491-0dd7-4c17-9835-83fe9d59eb06" May 17 00:40:31.228668 env[1303]: time="2025-05-17T00:40:31.228611842Z" level=error msg="Failed to destroy network for sandbox 
\"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.229010 env[1303]: time="2025-05-17T00:40:31.228971765Z" level=error msg="encountered an error cleaning up failed sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.229089 env[1303]: time="2025-05-17T00:40:31.229039775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d7f797b5-wggl9,Uid:c1418ed5-60dc-4d92-83de-664495988962,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.229343 kubelet[2118]: E0517 00:40:31.229285 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.229343 kubelet[2118]: E0517 00:40:31.229346 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d7f797b5-wggl9" May 17 00:40:31.229562 kubelet[2118]: E0517 00:40:31.229373 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d7f797b5-wggl9" May 17 00:40:31.229562 kubelet[2118]: E0517 00:40:31.229420 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67d7f797b5-wggl9_calico-system(c1418ed5-60dc-4d92-83de-664495988962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67d7f797b5-wggl9_calico-system(c1418ed5-60dc-4d92-83de-664495988962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d7f797b5-wggl9" podUID="c1418ed5-60dc-4d92-83de-664495988962" May 17 00:40:31.231962 env[1303]: time="2025-05-17T00:40:31.231904533Z" level=error msg="Failed to destroy network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.232377 env[1303]: time="2025-05-17T00:40:31.232339039Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.232470 env[1303]: time="2025-05-17T00:40:31.232387663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-6mszs,Uid:4c9eee5c-cc38-4032-8b07-e8d97094a990,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.232650 kubelet[2118]: E0517 00:40:31.232621 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.232714 kubelet[2118]: E0517 00:40:31.232664 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:31.232714 kubelet[2118]: E0517 00:40:31.232681 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-6mszs" May 17 00:40:31.232770 kubelet[2118]: E0517 00:40:31.232716 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-6mszs_calico-system(4c9eee5c-cc38-4032-8b07-e8d97094a990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-6mszs_calico-system(4c9eee5c-cc38-4032-8b07-e8d97094a990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:40:31.262733 kubelet[2118]: I0517 00:40:31.262692 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:31.263336 env[1303]: time="2025-05-17T00:40:31.263281919Z" level=info msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" May 17 00:40:31.264276 kubelet[2118]: I0517 00:40:31.263902 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:31.264545 env[1303]: time="2025-05-17T00:40:31.264503100Z" level=info msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" May 17 00:40:31.265743 kubelet[2118]: I0517 00:40:31.265484 2118 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:31.265898 env[1303]: time="2025-05-17T00:40:31.265857809Z" level=info msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" May 17 00:40:31.268287 kubelet[2118]: I0517 00:40:31.268265 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:31.268795 env[1303]: time="2025-05-17T00:40:31.268763004Z" level=info msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\"" May 17 00:40:31.269985 kubelet[2118]: I0517 00:40:31.269962 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:31.270463 env[1303]: time="2025-05-17T00:40:31.270426186Z" level=info msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" May 17 00:40:31.274852 kubelet[2118]: I0517 00:40:31.271345 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:31.274922 env[1303]: time="2025-05-17T00:40:31.273503552Z" level=info msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" May 17 00:40:31.280551 env[1303]: time="2025-05-17T00:40:31.279402943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:40:31.280662 kubelet[2118]: I0517 00:40:31.279479 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:31.280909 env[1303]: time="2025-05-17T00:40:31.280871662Z" level=info msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" May 17 
00:40:31.282632 kubelet[2118]: I0517 00:40:31.282329 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:31.283068 env[1303]: time="2025-05-17T00:40:31.282868508Z" level=info msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\"" May 17 00:40:31.302489 env[1303]: time="2025-05-17T00:40:31.302428105Z" level=error msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" failed" error="failed to destroy network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.302732 kubelet[2118]: E0517 00:40:31.302687 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:31.302805 kubelet[2118]: E0517 00:40:31.302756 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076"} May 17 00:40:31.302855 kubelet[2118]: E0517 00:40:31.302826 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b53473ef-a278-4a28-a57c-3f8a53828a27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.302949 kubelet[2118]: E0517 00:40:31.302861 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b53473ef-a278-4a28-a57c-3f8a53828a27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" podUID="b53473ef-a278-4a28-a57c-3f8a53828a27" May 17 00:40:31.325353 env[1303]: time="2025-05-17T00:40:31.325291114Z" level=error msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" failed" error="failed to destroy network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.325889 kubelet[2118]: E0517 00:40:31.325850 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:31.325952 kubelet[2118]: E0517 00:40:31.325903 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1"} May 17 00:40:31.325952 kubelet[2118]: E0517 00:40:31.325935 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11364172-e58d-4414-85d0-e2ab3fb5d624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.326038 kubelet[2118]: E0517 00:40:31.325955 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11364172-e58d-4414-85d0-e2ab3fb5d624\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" podUID="11364172-e58d-4414-85d0-e2ab3fb5d624" May 17 00:40:31.335138 env[1303]: time="2025-05-17T00:40:31.335052162Z" level=error msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" failed" error="failed to destroy network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.335575 kubelet[2118]: E0517 00:40:31.335527 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:31.335638 kubelet[2118]: E0517 00:40:31.335579 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db"} May 17 00:40:31.335638 kubelet[2118]: E0517 00:40:31.335610 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c9eee5c-cc38-4032-8b07-e8d97094a990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.335724 kubelet[2118]: E0517 00:40:31.335633 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c9eee5c-cc38-4032-8b07-e8d97094a990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:40:31.336805 env[1303]: time="2025-05-17T00:40:31.336744912Z" level=error msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" failed" error="failed to destroy network for 
sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.336956 kubelet[2118]: E0517 00:40:31.336929 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:31.337002 kubelet[2118]: E0517 00:40:31.336957 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47"} May 17 00:40:31.337002 kubelet[2118]: E0517 00:40:31.336979 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a553309-56ea-422b-ad2d-4882053f4a1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.337002 kubelet[2118]: E0517 00:40:31.336994 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a553309-56ea-422b-ad2d-4882053f4a1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qd4gz" podUID="1a553309-56ea-422b-ad2d-4882053f4a1c" May 17 00:40:31.338180 env[1303]: time="2025-05-17T00:40:31.338144998Z" level=error msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" failed" error="failed to destroy network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.338283 kubelet[2118]: E0517 00:40:31.338260 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:31.338328 kubelet[2118]: E0517 00:40:31.338282 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2"} May 17 00:40:31.338328 kubelet[2118]: E0517 00:40:31.338301 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58c33d02-5e7f-4996-a72a-2b2ef65a5742\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" May 17 00:40:31.338328 kubelet[2118]: E0517 00:40:31.338315 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58c33d02-5e7f-4996-a72a-2b2ef65a5742\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" podUID="58c33d02-5e7f-4996-a72a-2b2ef65a5742" May 17 00:40:31.351055 env[1303]: time="2025-05-17T00:40:31.350999253Z" level=error msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" failed" error="failed to destroy network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.351364 kubelet[2118]: E0517 00:40:31.351319 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:31.351422 kubelet[2118]: E0517 00:40:31.351394 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e"} May 17 00:40:31.351451 kubelet[2118]: E0517 00:40:31.351430 2118 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"272cb491-0dd7-4c17-9835-83fe9d59eb06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.351532 kubelet[2118]: E0517 00:40:31.351461 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"272cb491-0dd7-4c17-9835-83fe9d59eb06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" podUID="272cb491-0dd7-4c17-9835-83fe9d59eb06" May 17 00:40:31.352216 env[1303]: time="2025-05-17T00:40:31.352165769Z" level=error msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" failed" error="failed to destroy network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.352338 kubelet[2118]: E0517 00:40:31.352318 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:31.352338 kubelet[2118]: E0517 00:40:31.352338 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7"} May 17 00:40:31.352415 kubelet[2118]: E0517 00:40:31.352355 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e17ba7d8-0fd1-43ad-968d-26d53c711122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.352415 kubelet[2118]: E0517 00:40:31.352371 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e17ba7d8-0fd1-43ad-968d-26d53c711122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2z6nm" podUID="e17ba7d8-0fd1-43ad-968d-26d53c711122" May 17 00:40:31.352802 env[1303]: time="2025-05-17T00:40:31.352772256Z" level=error msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" failed" error="failed to destroy network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:31.352942 kubelet[2118]: E0517 00:40:31.352900 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:31.352972 kubelet[2118]: E0517 00:40:31.352956 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7"} May 17 00:40:31.353010 kubelet[2118]: E0517 00:40:31.352994 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1418ed5-60dc-4d92-83de-664495988962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:31.353065 kubelet[2118]: E0517 00:40:31.353021 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1418ed5-60dc-4d92-83de-664495988962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d7f797b5-wggl9" 
podUID="c1418ed5-60dc-4d92-83de-664495988962" May 17 00:40:32.487720 env[1303]: time="2025-05-17T00:40:32.487675465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t72dz,Uid:df42f368-756b-4bd3-8365-0200df6a0484,Namespace:calico-system,Attempt:0,}" May 17 00:40:32.874939 env[1303]: time="2025-05-17T00:40:32.874860803Z" level=error msg="Failed to destroy network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:32.877529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b-shm.mount: Deactivated successfully. May 17 00:40:32.878038 env[1303]: time="2025-05-17T00:40:32.877973143Z" level=error msg="encountered an error cleaning up failed sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:32.878098 env[1303]: time="2025-05-17T00:40:32.878037747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t72dz,Uid:df42f368-756b-4bd3-8365-0200df6a0484,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:32.878339 kubelet[2118]: E0517 00:40:32.878287 2118 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:32.878723 kubelet[2118]: E0517 00:40:32.878349 2118 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t72dz" May 17 00:40:32.878723 kubelet[2118]: E0517 00:40:32.878377 2118 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t72dz" May 17 00:40:32.878723 kubelet[2118]: E0517 00:40:32.878424 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t72dz_calico-system(df42f368-756b-4bd3-8365-0200df6a0484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t72dz_calico-system(df42f368-756b-4bd3-8365-0200df6a0484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t72dz" 
podUID="df42f368-756b-4bd3-8365-0200df6a0484" May 17 00:40:33.286806 kubelet[2118]: I0517 00:40:33.286691 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:33.287276 env[1303]: time="2025-05-17T00:40:33.287245312Z" level=info msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\"" May 17 00:40:33.311743 env[1303]: time="2025-05-17T00:40:33.311684334Z" level=error msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" failed" error="failed to destroy network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:40:33.312043 kubelet[2118]: E0517 00:40:33.311965 2118 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:33.312127 kubelet[2118]: E0517 00:40:33.312050 2118 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b"} May 17 00:40:33.312127 kubelet[2118]: E0517 00:40:33.312081 2118 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df42f368-756b-4bd3-8365-0200df6a0484\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:40:33.312127 kubelet[2118]: E0517 00:40:33.312113 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df42f368-756b-4bd3-8365-0200df6a0484\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t72dz" podUID="df42f368-756b-4bd3-8365-0200df6a0484" May 17 00:40:39.517848 systemd[1]: Started sshd@7-10.0.0.136:22-10.0.0.1:37242.service. May 17 00:40:39.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:37242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:39.524211 kernel: audit: type=1130 audit(1747442439.516:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:37242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:39.558000 audit[3422]: USER_ACCT pid=3422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.564059 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 37242 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:39.564372 kernel: audit: type=1101 audit(1747442439.558:294): pid=3422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.564433 kernel: audit: type=1103 audit(1747442439.563:295): pid=3422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.563000 audit[3422]: CRED_ACQ pid=3422 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.566718 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:39.573048 kernel: audit: type=1006 audit(1747442439.563:296): pid=3422 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 17 00:40:39.573260 kernel: audit: type=1300 audit(1747442439.563:296): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff75c634a0 a2=3 a3=0 items=0 ppid=1 pid=3422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 00:40:39.563000 audit[3422]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff75c634a0 a2=3 a3=0 items=0 ppid=1 pid=3422 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:39.572683 systemd[1]: Started session-8.scope. May 17 00:40:39.574801 systemd-logind[1293]: New session 8 of user core. May 17 00:40:39.563000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:40:39.578000 audit[3422]: USER_START pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.585554 kernel: audit: type=1327 audit(1747442439.563:296): proctitle=737368643A20636F7265205B707269765D May 17 00:40:39.585632 kernel: audit: type=1105 audit(1747442439.578:297): pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.585680 kernel: audit: type=1103 audit(1747442439.579:298): pid=3425 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.579000 audit[3425]: CRED_ACQ pid=3425 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.757143 sshd[3422]: pam_unix(sshd:session): session closed for user core May 17 00:40:39.756000 
audit[3422]: USER_END pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.759797 systemd[1]: sshd@7-10.0.0.136:22-10.0.0.1:37242.service: Deactivated successfully. May 17 00:40:39.760879 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:40:39.761602 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. May 17 00:40:39.762555 systemd-logind[1293]: Removed session 8. May 17 00:40:39.757000 audit[3422]: CRED_DISP pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.766456 kernel: audit: type=1106 audit(1747442439.756:299): pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.766584 kernel: audit: type=1104 audit(1747442439.757:300): pid=3422 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:39.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.136:22-10.0.0.1:37242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:39.853011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817489988.mount: Deactivated successfully. 
May 17 00:40:41.632054 env[1303]: time="2025-05-17T00:40:41.631942315Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:41.637076 env[1303]: time="2025-05-17T00:40:41.636956786Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:41.650375 env[1303]: time="2025-05-17T00:40:41.650265387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:41.670397 env[1303]: time="2025-05-17T00:40:41.670342798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:41.670925 env[1303]: time="2025-05-17T00:40:41.670894835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:40:41.709249 env[1303]: time="2025-05-17T00:40:41.709201768Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:40:41.779740 env[1303]: time="2025-05-17T00:40:41.779676902Z" level=info msg="CreateContainer within sandbox \"7c2240477ebc4cf123693468a442689f15830990f41af7e5fa62b7f9b0fbe42b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91d8b53621d638b132841871a76978e3b01079f5e8ec8863fd5caa72f6bc7b6a\"" May 17 00:40:41.780824 env[1303]: time="2025-05-17T00:40:41.780754455Z" level=info msg="StartContainer for 
\"91d8b53621d638b132841871a76978e3b01079f5e8ec8863fd5caa72f6bc7b6a\"" May 17 00:40:41.982241 env[1303]: time="2025-05-17T00:40:41.981921115Z" level=info msg="StartContainer for \"91d8b53621d638b132841871a76978e3b01079f5e8ec8863fd5caa72f6bc7b6a\" returns successfully" May 17 00:40:42.198371 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:40:42.198546 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:40:42.489339 env[1303]: time="2025-05-17T00:40:42.487645338Z" level=info msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" May 17 00:40:42.489339 env[1303]: time="2025-05-17T00:40:42.487691126Z" level=info msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" May 17 00:40:42.489339 env[1303]: time="2025-05-17T00:40:42.488004986Z" level=info msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" May 17 00:40:42.583645 kubelet[2118]: I0517 00:40:42.577936 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gv478" podStartSLOduration=1.859179527 podStartE2EDuration="30.577910508s" podCreationTimestamp="2025-05-17 00:40:12 +0000 UTC" firstStartedPulling="2025-05-17 00:40:12.955752966 +0000 UTC m=+22.554200768" lastFinishedPulling="2025-05-17 00:40:41.674483947 +0000 UTC m=+51.272931749" observedRunningTime="2025-05-17 00:40:42.364825999 +0000 UTC m=+51.963273831" watchObservedRunningTime="2025-05-17 00:40:42.577910508 +0000 UTC m=+52.176358310" May 17 00:40:42.584158 env[1303]: time="2025-05-17T00:40:42.581992981Z" level=info msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:42.718 [INFO][3532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 
00:40:43.158838 env[1303]: 2025-05-17 00:40:42.740 [INFO][3532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" iface="eth0" netns="/var/run/netns/cni-8374f7c6-e88d-fc6e-ecee-3307a92b2cfd" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:42.747 [INFO][3532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" iface="eth0" netns="/var/run/netns/cni-8374f7c6-e88d-fc6e-ecee-3307a92b2cfd" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:42.748 [INFO][3532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" iface="eth0" netns="/var/run/netns/cni-8374f7c6-e88d-fc6e-ecee-3307a92b2cfd" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:42.748 [INFO][3532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:42.748 [INFO][3532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.091 [INFO][3589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.095 [INFO][3589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.095 [INFO][3589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.119 [WARNING][3589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.119 [INFO][3589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.148 [INFO][3589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:43.158838 env[1303]: 2025-05-17 00:40:43.154 [INFO][3532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:43.161344 env[1303]: time="2025-05-17T00:40:43.161027144Z" level=info msg="TearDown network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" successfully" May 17 00:40:43.161344 env[1303]: time="2025-05-17T00:40:43.161086507Z" level=info msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" returns successfully" May 17 00:40:43.162205 systemd[1]: run-netns-cni\x2d8374f7c6\x2de88d\x2dfc6e\x2decee\x2d3307a92b2cfd.mount: Deactivated successfully. 
May 17 00:40:43.163548 kubelet[2118]: E0517 00:40:43.162883 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:43.169615 env[1303]: time="2025-05-17T00:40:43.168001204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd4gz,Uid:1a553309-56ea-422b-ad2d-4882053f4a1c,Namespace:kube-system,Attempt:1,}" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.783 [INFO][3580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.783 [INFO][3580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" iface="eth0" netns="/var/run/netns/cni-f6719de0-2db8-6026-5be3-35e2cfe7dc21" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.784 [INFO][3580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" iface="eth0" netns="/var/run/netns/cni-f6719de0-2db8-6026-5be3-35e2cfe7dc21" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.784 [INFO][3580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" iface="eth0" netns="/var/run/netns/cni-f6719de0-2db8-6026-5be3-35e2cfe7dc21" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.784 [INFO][3580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:42.784 [INFO][3580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.100 [INFO][3594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.100 [INFO][3594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.147 [INFO][3594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.175 [WARNING][3594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.175 [INFO][3594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.224 [INFO][3594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:43.236421 env[1303]: 2025-05-17 00:40:43.227 [INFO][3580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:43.239453 env[1303]: time="2025-05-17T00:40:43.239392131Z" level=info msg="TearDown network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" successfully" May 17 00:40:43.239453 env[1303]: time="2025-05-17T00:40:43.239447366Z" level=info msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" returns successfully" May 17 00:40:43.239963 systemd[1]: run-netns-cni\x2df6719de0\x2d2db8\x2d6026\x2d5be3\x2d35e2cfe7dc21.mount: Deactivated successfully. 
May 17 00:40:43.371993 kubelet[2118]: I0517 00:40:43.371290 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1418ed5-60dc-4d92-83de-664495988962-whisker-ca-bundle\") pod \"c1418ed5-60dc-4d92-83de-664495988962\" (UID: \"c1418ed5-60dc-4d92-83de-664495988962\") " May 17 00:40:43.371993 kubelet[2118]: I0517 00:40:43.372003 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1418ed5-60dc-4d92-83de-664495988962-whisker-backend-key-pair\") pod \"c1418ed5-60dc-4d92-83de-664495988962\" (UID: \"c1418ed5-60dc-4d92-83de-664495988962\") " May 17 00:40:43.372258 kubelet[2118]: I0517 00:40:43.372078 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45nms\" (UniqueName: \"kubernetes.io/projected/c1418ed5-60dc-4d92-83de-664495988962-kube-api-access-45nms\") pod \"c1418ed5-60dc-4d92-83de-664495988962\" (UID: \"c1418ed5-60dc-4d92-83de-664495988962\") " May 17 00:40:43.372258 kubelet[2118]: I0517 00:40:43.371916 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1418ed5-60dc-4d92-83de-664495988962-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c1418ed5-60dc-4d92-83de-664495988962" (UID: "c1418ed5-60dc-4d92-83de-664495988962"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:40:43.372698 kubelet[2118]: I0517 00:40:43.372558 2118 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1418ed5-60dc-4d92-83de-664495988962-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:40:43.436403 kubelet[2118]: I0517 00:40:43.436236 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1418ed5-60dc-4d92-83de-664495988962-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c1418ed5-60dc-4d92-83de-664495988962" (UID: "c1418ed5-60dc-4d92-83de-664495988962"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:40:43.441004 kubelet[2118]: I0517 00:40:43.440948 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1418ed5-60dc-4d92-83de-664495988962-kube-api-access-45nms" (OuterVolumeSpecName: "kube-api-access-45nms") pod "c1418ed5-60dc-4d92-83de-664495988962" (UID: "c1418ed5-60dc-4d92-83de-664495988962"). InnerVolumeSpecName "kube-api-access-45nms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:40:43.477558 kubelet[2118]: I0517 00:40:43.476241 2118 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1418ed5-60dc-4d92-83de-664495988962-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:40:43.477558 kubelet[2118]: I0517 00:40:43.476273 2118 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45nms\" (UniqueName: \"kubernetes.io/projected/c1418ed5-60dc-4d92-83de-664495988962-kube-api-access-45nms\") on node \"localhost\" DevicePath \"\"" May 17 00:40:43.486125 env[1303]: time="2025-05-17T00:40:43.486066598Z" level=info msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.749 [INFO][3549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.749 [INFO][3549] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" iface="eth0" netns="/var/run/netns/cni-e67205e6-95e3-84d1-6fa0-93745618d608" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.749 [INFO][3549] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" iface="eth0" netns="/var/run/netns/cni-e67205e6-95e3-84d1-6fa0-93745618d608" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.750 [INFO][3549] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" iface="eth0" netns="/var/run/netns/cni-e67205e6-95e3-84d1-6fa0-93745618d608" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.750 [INFO][3549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:42.750 [INFO][3549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.090 [INFO][3591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.101 [INFO][3591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.224 [INFO][3591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.452 [WARNING][3591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.452 [INFO][3591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.472 [INFO][3591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:43.492214 env[1303]: 2025-05-17 00:40:43.483 [INFO][3549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:43.494656 env[1303]: time="2025-05-17T00:40:43.494611242Z" level=info msg="TearDown network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" successfully" May 17 00:40:43.494793 env[1303]: time="2025-05-17T00:40:43.494766229Z" level=info msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" returns successfully" May 17 00:40:43.495922 env[1303]: time="2025-05-17T00:40:43.495884828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-hkhhx,Uid:272cb491-0dd7-4c17-9835-83fe9d59eb06,Namespace:calico-apiserver,Attempt:1,}" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.747 [INFO][3535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.747 [INFO][3535] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" iface="eth0" netns="/var/run/netns/cni-d2c62165-a87d-316f-1968-ea6cb7b1f9f8" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.747 [INFO][3535] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" iface="eth0" netns="/var/run/netns/cni-d2c62165-a87d-316f-1968-ea6cb7b1f9f8" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.749 [INFO][3535] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" iface="eth0" netns="/var/run/netns/cni-d2c62165-a87d-316f-1968-ea6cb7b1f9f8" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.749 [INFO][3535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:42.749 [INFO][3535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.119 [INFO][3592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.120 [INFO][3592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.477 [INFO][3592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.511 [WARNING][3592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.511 [INFO][3592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.520 [INFO][3592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:43.537524 env[1303]: 2025-05-17 00:40:43.532 [INFO][3535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:43.538335 env[1303]: time="2025-05-17T00:40:43.538298998Z" level=info msg="TearDown network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" successfully" May 17 00:40:43.538420 env[1303]: time="2025-05-17T00:40:43.538397576Z" level=info msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" returns successfully" May 17 00:40:43.542560 env[1303]: time="2025-05-17T00:40:43.542499763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd9bb8f8-zxssx,Uid:b53473ef-a278-4a28-a57c-3f8a53828a27,Namespace:calico-system,Attempt:1,}" May 17 00:40:43.701820 systemd[1]: run-containerd-runc-k8s.io-91d8b53621d638b132841871a76978e3b01079f5e8ec8863fd5caa72f6bc7b6a-runc.RQfsJg.mount: Deactivated successfully. May 17 00:40:43.702017 systemd[1]: run-netns-cni\x2de67205e6\x2d95e3\x2d84d1\x2d6fa0\x2d93745618d608.mount: Deactivated successfully. 
May 17 00:40:43.702160 systemd[1]: run-netns-cni\x2dd2c62165\x2da87d\x2d316f\x2d1968\x2dea6cb7b1f9f8.mount: Deactivated successfully. May 17 00:40:43.702287 systemd[1]: var-lib-kubelet-pods-c1418ed5\x2d60dc\x2d4d92\x2d83de\x2d664495988962-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d45nms.mount: Deactivated successfully. May 17 00:40:43.702427 systemd[1]: var-lib-kubelet-pods-c1418ed5\x2d60dc\x2d4d92\x2d83de\x2d664495988962-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.632 [INFO][3656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.632 [INFO][3656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" iface="eth0" netns="/var/run/netns/cni-4a202086-438c-9adf-7e79-b7e977dbfb98" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.633 [INFO][3656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" iface="eth0" netns="/var/run/netns/cni-4a202086-438c-9adf-7e79-b7e977dbfb98" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.633 [INFO][3656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" iface="eth0" netns="/var/run/netns/cni-4a202086-438c-9adf-7e79-b7e977dbfb98" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.633 [INFO][3656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.633 [INFO][3656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.787 [INFO][3687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.787 [INFO][3687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.787 [INFO][3687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.809 [WARNING][3687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.809 [INFO][3687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.854 [INFO][3687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:43.874325 env[1303]: 2025-05-17 00:40:43.862 [INFO][3656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:43.883858 systemd[1]: run-netns-cni\x2d4a202086\x2d438c\x2d9adf\x2d7e79\x2db7e977dbfb98.mount: Deactivated successfully. 
May 17 00:40:43.899952 env[1303]: time="2025-05-17T00:40:43.899884446Z" level=info msg="TearDown network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" successfully" May 17 00:40:43.900192 env[1303]: time="2025-05-17T00:40:43.900171165Z" level=info msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" returns successfully" May 17 00:40:43.901165 env[1303]: time="2025-05-17T00:40:43.901066718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-lwqn5,Uid:11364172-e58d-4414-85d0-e2ab3fb5d624,Namespace:calico-apiserver,Attempt:1,}" May 17 00:40:43.992219 kubelet[2118]: I0517 00:40:43.991916 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpjm\" (UniqueName: \"kubernetes.io/projected/89fcd0a8-8017-46e2-b5fb-22df060c0c43-kube-api-access-qtpjm\") pod \"whisker-bc8568d5-d7fx5\" (UID: \"89fcd0a8-8017-46e2-b5fb-22df060c0c43\") " pod="calico-system/whisker-bc8568d5-d7fx5" May 17 00:40:43.992219 kubelet[2118]: I0517 00:40:43.991975 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89fcd0a8-8017-46e2-b5fb-22df060c0c43-whisker-ca-bundle\") pod \"whisker-bc8568d5-d7fx5\" (UID: \"89fcd0a8-8017-46e2-b5fb-22df060c0c43\") " pod="calico-system/whisker-bc8568d5-d7fx5" May 17 00:40:43.992219 kubelet[2118]: I0517 00:40:43.992002 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89fcd0a8-8017-46e2-b5fb-22df060c0c43-whisker-backend-key-pair\") pod \"whisker-bc8568d5-d7fx5\" (UID: \"89fcd0a8-8017-46e2-b5fb-22df060c0c43\") " pod="calico-system/whisker-bc8568d5-d7fx5" May 17 00:40:44.453367 env[1303]: time="2025-05-17T00:40:44.452646471Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-bc8568d5-d7fx5,Uid:89fcd0a8-8017-46e2-b5fb-22df060c0c43,Namespace:calico-system,Attempt:0,}" May 17 00:40:44.488705 env[1303]: time="2025-05-17T00:40:44.487387112Z" level=info msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" May 17 00:40:44.492754 kubelet[2118]: I0517 00:40:44.489247 2118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1418ed5-60dc-4d92-83de-664495988962" path="/var/lib/kubelet/pods/c1418ed5-60dc-4d92-83de-664495988962/volumes" May 17 00:40:44.650768 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:40:44.650972 kernel: audit: type=1400 audit(1747442444.640:302): avc: denied { write } for pid=3817 comm="tee" name="fd" dev="proc" ino=24243 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.640000 audit[3817]: AVC avc: denied { write } for pid=3817 comm="tee" name="fd" dev="proc" ino=24243 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.662207 kernel: audit: type=1300 audit(1747442444.640:302): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe741117e9 a2=241 a3=1b6 items=1 ppid=3798 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.640000 audit[3817]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe741117e9 a2=241 a3=1b6 items=1 ppid=3798 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.640000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:40:44.671887 kernel: audit: type=1307 audit(1747442444.640:302): cwd="/etc/service/enabled/bird6/log" May 17 
00:40:44.640000 audit: PATH item=0 name="/dev/fd/63" inode=24216 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.706602 kernel: audit: type=1302 audit(1747442444.640:302): item=0 name="/dev/fd/63" inode=24216 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.706704 kernel: audit: type=1327 audit(1747442444.640:302): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.737198 kernel: audit: type=1400 audit(1747442444.720:303): avc: denied { write } for pid=3842 comm="tee" name="fd" dev="proc" ino=23547 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.737357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:40:44.737401 kernel: audit: type=1300 audit(1747442444.720:303): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff397cd7eb a2=241 a3=1b6 items=1 ppid=3800 pid=3842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.737427 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaf0b415252c: link becomes ready May 17 00:40:44.720000 audit[3842]: AVC avc: denied { write } for pid=3842 comm="tee" name="fd" dev="proc" ino=23547 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.720000 audit[3842]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7fff397cd7eb a2=241 a3=1b6 items=1 ppid=3800 pid=3842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.724598 systemd-networkd[1079]: caliaf0b415252c: Link UP May 17 00:40:44.745708 kernel: audit: type=1307 audit(1747442444.720:303): cwd="/etc/service/enabled/cni/log" May 17 00:40:44.720000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:40:44.761303 kernel: audit: type=1302 audit(1747442444.720:303): item=0 name="/dev/fd/63" inode=25156 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.761795 kernel: audit: type=1327 audit(1747442444.720:303): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.720000 audit: PATH item=0 name="/dev/fd/63" inode=25156 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.752590 systemd-networkd[1079]: caliaf0b415252c: Gained carrier May 17 00:40:44.760965 systemd[1]: Started sshd@8-10.0.0.136:22-10.0.0.1:50796.service. May 17 00:40:44.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.136:22-10.0.0.1:50796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:44.788000 audit[3880]: AVC avc: denied { write } for pid=3880 comm="tee" name="fd" dev="proc" ino=24259 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.788000 audit[3880]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc9d9387e9 a2=241 a3=1b6 items=1 ppid=3822 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.788000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:40:44.788000 audit: PATH item=0 name="/dev/fd/63" inode=25178 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.790000 audit[3888]: AVC avc: denied { write } for pid=3888 comm="tee" name="fd" dev="proc" ino=24263 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.790000 audit[3888]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdb5137d9 a2=241 a3=1b6 items=1 ppid=3826 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.790000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:40:44.790000 audit: PATH item=0 name="/dev/fd/63" inode=25188 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.790000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.801000 audit[3836]: AVC avc: denied { write } for pid=3836 comm="tee" name="fd" dev="proc" ino=26443 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.801000 audit[3836]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaf4577ea a2=241 a3=1b6 items=1 ppid=3803 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.801000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:40:44.801000 audit: PATH item=0 name="/dev/fd/63" inode=23540 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.811000 audit[3899]: AVC avc: denied { write } for pid=3899 comm="tee" name="fd" dev="proc" ino=26447 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.811000 audit[3899]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb2b657da a2=241 a3=1b6 items=1 ppid=3823 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.811000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:40:44.811000 audit: PATH item=0 name="/dev/fd/63" inode=24267 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.841829 systemd-networkd[1079]: califcd07a666c0: Link UP May 17 00:40:44.849058 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califcd07a666c0: link becomes ready May 17 00:40:44.845723 systemd-networkd[1079]: califcd07a666c0: Gained carrier May 17 00:40:44.849000 audit[3882]: USER_ACCT pid=3882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:44.854000 audit[3882]: CRED_ACQ pid=3882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:44.855418 sshd[3882]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:44.859000 audit[3882]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0b721400 a2=3 a3=0 items=0 ppid=1 pid=3882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.859000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.594 [INFO][3674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.647 [INFO][3674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0 coredns-7c65d6cfc9- 
kube-system 1a553309-56ea-422b-ad2d-4882053f4a1c 963 0 2025-05-17 00:39:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qd4gz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf0b415252c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.647 [INFO][3674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.828 [INFO][3719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" HandleID="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.829 [INFO][3719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" HandleID="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qd4gz", "timestamp":"2025-05-17 00:40:43.828805226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.829 [INFO][3719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.862 [INFO][3719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:43.863 [INFO][3719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.057 [INFO][3719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.433 [INFO][3719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.463 [INFO][3719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.469 [INFO][3719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.476 [INFO][3719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.476 [INFO][3719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.482 [INFO][3719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082 May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.525 [INFO][3719] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.561 [INFO][3719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.561 [INFO][3719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" host="localhost" May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.561 [INFO][3719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:44.864285 env[1303]: 2025-05-17 00:40:44.561 [INFO][3719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" HandleID="k8s-pod-network.dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864944 env[1303]: 2025-05-17 00:40:44.570 [INFO][3674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a553309-56ea-422b-ad2d-4882053f4a1c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qd4gz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf0b415252c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:44.864944 env[1303]: 2025-05-17 00:40:44.575 [INFO][3674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864944 env[1303]: 2025-05-17 00:40:44.577 [INFO][3674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf0b415252c ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864944 
env[1303]: 2025-05-17 00:40:44.756 [INFO][3674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.864944 env[1303]: 2025-05-17 00:40:44.763 [INFO][3674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a553309-56ea-422b-ad2d-4882053f4a1c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082", Pod:"coredns-7c65d6cfc9-qd4gz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf0b415252c", MAC:"3a:06:38:df:25:50", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:44.864944 env[1303]: 2025-05-17 00:40:44.835 [INFO][3674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qd4gz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:44.869035 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:44.879645 systemd[1]: Started session-9.scope. May 17 00:40:44.881422 systemd-logind[1293]: New session 9 of user core. 
May 17 00:40:44.904000 audit[3882]: USER_START pid=3882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.731 [INFO][3692] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.773 [INFO][3692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0 calico-apiserver-5d67ffffc- calico-apiserver 272cb491-0dd7-4c17-9835-83fe9d59eb06 964 0 2025-05-17 00:40:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d67ffffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d67ffffc-hkhhx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcd07a666c0 [] [] }} ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.773 [INFO][3692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.934 [INFO][3733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.934 [INFO][3733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d67ffffc-hkhhx", "timestamp":"2025-05-17 00:40:43.93451618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:43.941 [INFO][3733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.561 [INFO][3733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.562 [INFO][3733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.593 [INFO][3733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.622 [INFO][3733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.653 [INFO][3733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.685 [INFO][3733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.712 [INFO][3733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.712 [INFO][3733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.745 [INFO][3733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.760 [INFO][3733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.824 [INFO][3733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" host="localhost" May 17 
00:40:44.908224 env[1303]: 2025-05-17 00:40:44.824 [INFO][3733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" host="localhost" May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.824 [INFO][3733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:44.908224 env[1303]: 2025-05-17 00:40:44.824 [INFO][3733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.833 [INFO][3692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"272cb491-0dd7-4c17-9835-83fe9d59eb06", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d67ffffc-hkhhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcd07a666c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.838 [INFO][3692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.838 [INFO][3692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcd07a666c0 ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.846 [INFO][3692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.847 [INFO][3692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"272cb491-0dd7-4c17-9835-83fe9d59eb06", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d", Pod:"calico-apiserver-5d67ffffc-hkhhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcd07a666c0", MAC:"72:c0:13:0b:10:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:44.908985 env[1303]: 2025-05-17 00:40:44.899 [INFO][3692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-hkhhx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 
00:40:44.917000 audit[3910]: CRED_ACQ pid=3910 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:44.935000 audit[3886]: AVC avc: denied { write } for pid=3886 comm="tee" name="fd" dev="proc" ino=26650 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:40:44.935000 audit[3886]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec12067e9 a2=241 a3=1b6 items=1 ppid=3821 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:44.935000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:40:44.935000 audit: PATH item=0 name="/dev/fd/63" inode=25187 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:44.935000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:40:44.982025 env[1303]: time="2025-05-17T00:40:44.974417635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:44.982292 env[1303]: time="2025-05-17T00:40:44.978658803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:44.982292 env[1303]: time="2025-05-17T00:40:44.978698880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:44.982292 env[1303]: time="2025-05-17T00:40:44.978941394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082 pid=3934 runtime=io.containerd.runc.v2 May 17 00:40:45.019831 env[1303]: time="2025-05-17T00:40:45.018957946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:45.020495 env[1303]: time="2025-05-17T00:40:45.020029223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:45.021902 env[1303]: time="2025-05-17T00:40:45.021846807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:45.029782 env[1303]: time="2025-05-17T00:40:45.029251258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d pid=3952 runtime=io.containerd.runc.v2 May 17 00:40:45.086403 systemd-networkd[1079]: caliab9e590488a: Link UP May 17 00:40:45.095444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliab9e590488a: link becomes ready May 17 00:40:45.094989 systemd-networkd[1079]: caliab9e590488a: Gained carrier May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:43.787 [INFO][3711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.047 [INFO][3711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0 calico-kube-controllers-ddd9bb8f8- calico-system b53473ef-a278-4a28-a57c-3f8a53828a27 965 0 2025-05-17 00:40:12 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ddd9bb8f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-ddd9bb8f8-zxssx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab9e590488a [] [] }} ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.047 [INFO][3711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.177 [INFO][3742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" HandleID="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.179 [INFO][3742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" HandleID="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00023d620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-ddd9bb8f8-zxssx", 
"timestamp":"2025-05-17 00:40:44.177742187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.179 [INFO][3742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.824 [INFO][3742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.824 [INFO][3742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.855 [INFO][3742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.926 [INFO][3742] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:44.966 [INFO][3742] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.010 [INFO][3742] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.019 [INFO][3742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.019 [INFO][3742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.022 [INFO][3742] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb May 
17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.035 [INFO][3742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.053 [INFO][3742] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.053 [INFO][3742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" host="localhost" May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.053 [INFO][3742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:45.156581 env[1303]: 2025-05-17 00:40:45.053 [INFO][3742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" HandleID="k8s-pod-network.c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.058 [INFO][3711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0", GenerateName:"calico-kube-controllers-ddd9bb8f8-", Namespace:"calico-system", SelfLink:"", UID:"b53473ef-a278-4a28-a57c-3f8a53828a27", 
ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd9bb8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-ddd9bb8f8-zxssx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab9e590488a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.058 [INFO][3711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.058 [INFO][3711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab9e590488a ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.094 [INFO][3711] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.095 [INFO][3711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0", GenerateName:"calico-kube-controllers-ddd9bb8f8-", Namespace:"calico-system", SelfLink:"", UID:"b53473ef-a278-4a28-a57c-3f8a53828a27", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd9bb8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb", Pod:"calico-kube-controllers-ddd9bb8f8-zxssx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab9e590488a", MAC:"5e:67:ce:57:cc:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.157606 env[1303]: 2025-05-17 00:40:45.148 [INFO][3711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb" Namespace="calico-system" Pod="calico-kube-controllers-ddd9bb8f8-zxssx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:45.204037 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:45.217843 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:45.277270 env[1303]: time="2025-05-17T00:40:45.277133848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qd4gz,Uid:1a553309-56ea-422b-ad2d-4882053f4a1c,Namespace:kube-system,Attempt:1,} returns sandbox id \"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082\"" May 17 00:40:45.278775 kubelet[2118]: E0517 00:40:45.278247 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:45.284046 env[1303]: time="2025-05-17T00:40:45.280683853Z" level=info msg="CreateContainer within sandbox \"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:40:45.287542 env[1303]: time="2025-05-17T00:40:45.287091268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:45.287542 env[1303]: time="2025-05-17T00:40:45.287265310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:45.287542 env[1303]: time="2025-05-17T00:40:45.287353058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:45.287769 env[1303]: time="2025-05-17T00:40:45.287612423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb pid=4033 runtime=io.containerd.runc.v2 May 17 00:40:45.385278 sshd[3882]: pam_unix(sshd:session): session closed for user core May 17 00:40:45.395000 audit[3882]: USER_END pid=3882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:45.395000 audit[3882]: CRED_DISP pid=3882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:45.402152 systemd[1]: sshd@8-10.0.0.136:22-10.0.0.1:50796.service: Deactivated successfully. May 17 00:40:45.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.136:22-10.0.0.1:50796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:45.404009 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:40:45.408706 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit. 
May 17 00:40:45.410129 systemd-logind[1293]: Removed session 9. May 17 00:40:45.438176 env[1303]: time="2025-05-17T00:40:45.438134602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-hkhhx,Uid:272cb491-0dd7-4c17-9835-83fe9d59eb06,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\"" May 17 00:40:45.442215 env[1303]: time="2025-05-17T00:40:45.442180985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:40:45.443139 systemd-networkd[1079]: cali126211cea7a: Link UP May 17 00:40:45.456895 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali126211cea7a: link becomes ready May 17 00:40:45.449790 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:45.453343 systemd-networkd[1079]: cali126211cea7a: Gained carrier May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.863 [INFO][3782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.863 [INFO][3782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" iface="eth0" netns="/var/run/netns/cni-1c40de7f-29c5-a36c-e5ef-f7e7ee78629b" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.864 [INFO][3782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" iface="eth0" netns="/var/run/netns/cni-1c40de7f-29c5-a36c-e5ef-f7e7ee78629b" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.870 [INFO][3782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" iface="eth0" netns="/var/run/netns/cni-1c40de7f-29c5-a36c-e5ef-f7e7ee78629b" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.870 [INFO][3782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:44.870 [INFO][3782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.032 [INFO][3908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.032 [INFO][3908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.363 [INFO][3908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.427 [WARNING][3908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.427 [INFO][3908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.443 [INFO][3908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:45.483824 env[1303]: 2025-05-17 00:40:45.472 [INFO][3782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:45.495168 env[1303]: time="2025-05-17T00:40:45.495059199Z" level=info msg="TearDown network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" successfully" May 17 00:40:45.495168 env[1303]: time="2025-05-17T00:40:45.495148109Z" level=info msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" returns successfully" May 17 00:40:45.498996 env[1303]: time="2025-05-17T00:40:45.497003475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-6mszs,Uid:4c9eee5c-cc38-4032-8b07-e8d97094a990,Namespace:calico-system,Attempt:1,}" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.417 [INFO][3749] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.455 [INFO][3749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0 calico-apiserver-5d67ffffc- calico-apiserver 
11364172-e58d-4414-85d0-e2ab3fb5d624 978 0 2025-05-17 00:40:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d67ffffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d67ffffc-lwqn5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali126211cea7a [] [] }} ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.456 [INFO][3749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.545 [INFO][3765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.546 [INFO][3765] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d67ffffc-lwqn5", 
"timestamp":"2025-05-17 00:40:44.545499108 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:44.546 [INFO][3765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.053 [INFO][3765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.054 [INFO][3765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.099 [INFO][3765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.145 [INFO][3765] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.227 [INFO][3765] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.244 [INFO][3765] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.274 [INFO][3765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.274 [INFO][3765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.291 [INFO][3765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863 May 
17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.315 [INFO][3765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.363 [INFO][3765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.363 [INFO][3765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" host="localhost" May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.363 [INFO][3765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:45.515005 env[1303]: 2025-05-17 00:40:45.363 [INFO][3765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.369 [INFO][3749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"11364172-e58d-4414-85d0-e2ab3fb5d624", ResourceVersion:"978", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d67ffffc-lwqn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali126211cea7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.383 [INFO][3749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.384 [INFO][3749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali126211cea7a ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.461 [INFO][3749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.462 [INFO][3749] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"11364172-e58d-4414-85d0-e2ab3fb5d624", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863", Pod:"calico-apiserver-5d67ffffc-lwqn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali126211cea7a", MAC:"0e:bb:ba:cb:70:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.515792 env[1303]: 2025-05-17 00:40:45.511 [INFO][3749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Namespace="calico-apiserver" Pod="calico-apiserver-5d67ffffc-lwqn5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:45.516000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.516000 audit: BPF prog-id=10 op=LOAD May 17 00:40:45.516000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3e436be0 a2=98 a3=3 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.516000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.521000 audit: BPF prog-id=10 op=UNLOAD May 17 00:40:45.522000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.528466 env[1303]: time="2025-05-17T00:40:45.525081714Z" level=info msg="CreateContainer within sandbox \"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9022f173f19c9bf462e7e6481327470310109f929b71e306b3a1b611b6cdaf53\"" May 17 00:40:45.528466 env[1303]: time="2025-05-17T00:40:45.525989188Z" level=info msg="StartContainer for \"9022f173f19c9bf462e7e6481327470310109f929b71e306b3a1b611b6cdaf53\"" May 17 00:40:45.522000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.522000 audit: BPF prog-id=11 op=LOAD May 17 00:40:45.522000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3e4369d0 a2=94 a3=54428f items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.522000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.547000 audit: BPF prog-id=11 op=UNLOAD May 17 00:40:45.547000 
audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.547000 audit: BPF prog-id=12 op=LOAD May 17 00:40:45.547000 audit[4104]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3e436a00 a2=94 a3=2 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.547000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.569000 audit: BPF prog-id=12 op=UNLOAD May 17 00:40:45.578602 env[1303]: time="2025-05-17T00:40:45.562690276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd9bb8f8-zxssx,Uid:b53473ef-a278-4a28-a57c-3f8a53828a27,Namespace:calico-system,Attempt:1,} returns sandbox id \"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb\"" May 17 00:40:45.595588 env[1303]: time="2025-05-17T00:40:45.591606087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:45.595588 env[1303]: time="2025-05-17T00:40:45.591655741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:45.595588 env[1303]: time="2025-05-17T00:40:45.591666903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:45.595588 env[1303]: time="2025-05-17T00:40:45.591955004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863 pid=4146 runtime=io.containerd.runc.v2 May 17 00:40:45.673611 env[1303]: time="2025-05-17T00:40:45.671248049Z" level=info msg="StartContainer for \"9022f173f19c9bf462e7e6481327470310109f929b71e306b3a1b611b6cdaf53\" returns successfully" May 17 00:40:45.681998 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:45.730894 systemd[1]: run-netns-cni\x2d1c40de7f\x2d29c5\x2da36c\x2de5ef\x2df7e7ee78629b.mount: Deactivated successfully. May 17 00:40:45.790499 env[1303]: time="2025-05-17T00:40:45.788518400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d67ffffc-lwqn5,Uid:11364172-e58d-4414-85d0-e2ab3fb5d624,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\"" May 17 00:40:45.811817 systemd-networkd[1079]: caliaf0b415252c: Gained IPv6LL May 17 00:40:45.830519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:40:45.830888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0f627461a63: link becomes ready May 17 00:40:45.829492 systemd-networkd[1079]: cali0f627461a63: Link UP May 17 00:40:45.829667 systemd-networkd[1079]: cali0f627461a63: Gained carrier May 17 00:40:45.859000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 
17 00:40:45.859000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.859000 audit: BPF prog-id=13 op=LOAD May 17 00:40:45.859000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3e4368c0 a2=94 a3=1 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.859000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 
00:40:45.865000 audit: BPF prog-id=13 op=UNLOAD May 17 00:40:45.865000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.865000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd3e436990 a2=50 a3=7ffd3e436a70 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.865000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e4368d0 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3e436900 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 
00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3e436810 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e436920 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e436900 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e4368f0 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e436920 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3e436900 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3e436920 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd3e4368f0 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd3e436960 a2=28 a3=0 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd3e436710 a2=50 a3=1 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for 
pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.886000 audit: BPF prog-id=14 op=LOAD May 17 00:40:45.886000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3e436710 a2=94 a3=5 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.887000 audit: BPF prog-id=14 op=UNLOAD May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd3e4367c0 a2=50 a3=1 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd3e4368e0 a2=4 a3=38 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:45.887000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3e436930 a2=94 a3=6 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:45.887000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3e4360e0 a2=94 a3=88 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { perfmon } for pid=4104 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { bpf } for pid=4104 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.887000 audit[4104]: AVC avc: denied { confidentiality } for pid=4104 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:45.887000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd3e4360e0 a2=94 a3=88 items=0 ppid=3837 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.887000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:40:45.957000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.957000 audit: BPF prog-id=15 op=LOAD May 17 00:40:45.957000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee1216b40 a2=98 a3=1999999999999999 items=0 ppid=3837 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.957000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:40:45.965000 audit: BPF prog-id=15 op=UNLOAD May 17 00:40:45.965000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:44.826 [INFO][3857] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:44.917 [INFO][3857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bc8568d5--d7fx5-eth0 whisker-bc8568d5- calico-system 89fcd0a8-8017-46e2-b5fb-22df060c0c43 996 0 2025-05-17 00:40:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bc8568d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bc8568d5-d7fx5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0f627461a63 [] [] 
}} ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:44.917 [INFO][3857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.255 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" HandleID="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Workload="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.256 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" HandleID="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Workload="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bc8568d5-d7fx5", "timestamp":"2025-05-17 00:40:45.255912644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.256 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.446 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.446 [INFO][3975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.494 [INFO][3975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.593 [INFO][3975] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.656 [INFO][3975] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.670 [INFO][3975] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.673 [INFO][3975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.673 [INFO][3975] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.695 [INFO][3975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9 May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.728 [INFO][3975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.756 [INFO][3975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" host="localhost" May 17 
00:40:45.969270 env[1303]: 2025-05-17 00:40:45.756 [INFO][3975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" host="localhost" May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.756 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:45.969270 env[1303]: 2025-05-17 00:40:45.756 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" HandleID="k8s-pod-network.a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Workload="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.783 [INFO][3857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bc8568d5--d7fx5-eth0", GenerateName:"whisker-bc8568d5-", Namespace:"calico-system", SelfLink:"", UID:"89fcd0a8-8017-46e2-b5fb-22df060c0c43", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc8568d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bc8568d5-d7fx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f627461a63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.784 [INFO][3857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.784 [INFO][3857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f627461a63 ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.827 [INFO][3857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.827 [INFO][3857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bc8568d5--d7fx5-eth0", GenerateName:"whisker-bc8568d5-", Namespace:"calico-system", SelfLink:"", UID:"89fcd0a8-8017-46e2-b5fb-22df060c0c43", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc8568d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9", Pod:"whisker-bc8568d5-d7fx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f627461a63", MAC:"e6:47:8d:64:06:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:45.970921 env[1303]: 2025-05-17 00:40:45.952 [INFO][3857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9" Namespace="calico-system" Pod="whisker-bc8568d5-d7fx5" WorkloadEndpoint="localhost-k8s-whisker--bc8568d5--d7fx5-eth0" May 17 00:40:45.965000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.965000 audit: BPF prog-id=16 op=LOAD May 17 00:40:45.965000 audit[4230]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=5 a1=7ffee1216a20 a2=94 a3=ffff items=0 ppid=3837 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.965000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:40:45.972000 audit: BPF prog-id=16 op=UNLOAD May 17 00:40:45.972000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { perfmon } for pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { perfmon } for 
pid=4230 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit[4230]: AVC avc: denied { bpf } for pid=4230 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:45.972000 audit: BPF prog-id=17 op=LOAD May 17 00:40:45.972000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee1216a60 a2=94 a3=7ffee1216c40 items=0 ppid=3837 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:45.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:40:45.973000 audit: BPF prog-id=17 op=UNLOAD May 17 00:40:46.083230 env[1303]: time="2025-05-17T00:40:46.083145807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:46.083459 env[1303]: time="2025-05-17T00:40:46.083430631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:46.083611 env[1303]: time="2025-05-17T00:40:46.083583793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:46.088764 env[1303]: time="2025-05-17T00:40:46.088682242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9 pid=4257 runtime=io.containerd.runc.v2 May 17 00:40:46.187346 systemd-networkd[1079]: califcd07a666c0: Gained IPv6LL May 17 00:40:46.217288 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:46.236218 systemd-networkd[1079]: cali5090752f7db: Link UP May 17 00:40:46.245208 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5090752f7db: link becomes ready May 17 00:40:46.252288 systemd-networkd[1079]: cali5090752f7db: Gained carrier May 17 00:40:46.268930 systemd-networkd[1079]: vxlan.calico: Link UP May 17 00:40:46.268938 systemd-networkd[1079]: vxlan.calico: Gained carrier May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.808 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0 goldmane-8f77d7b6c- calico-system 4c9eee5c-cc38-4032-8b07-e8d97094a990 1007 0 2025-05-17 00:40:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-6mszs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5090752f7db [] [] }} ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.809 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.991 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" HandleID="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.991 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" HandleID="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-6mszs", "timestamp":"2025-05-17 00:40:45.991425179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.991 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.991 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:45.992 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.010 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.032 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.066 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.081 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.106 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.106 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.125 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.160 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.216 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" host="localhost" May 17 
00:40:46.292949 env[1303]: 2025-05-17 00:40:46.216 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" host="localhost" May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.216 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:46.292949 env[1303]: 2025-05-17 00:40:46.216 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" HandleID="k8s-pod-network.9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.221 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4c9eee5c-cc38-4032-8b07-e8d97094a990", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-6mszs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5090752f7db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.221 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.221 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5090752f7db ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.237 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.240 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4c9eee5c-cc38-4032-8b07-e8d97094a990", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e", Pod:"goldmane-8f77d7b6c-6mszs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5090752f7db", MAC:"7a:6b:fc:8f:25:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:46.294250 env[1303]: 2025-05-17 00:40:46.280 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e" Namespace="calico-system" Pod="goldmane-8f77d7b6c-6mszs" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:46.341465 env[1303]: time="2025-05-17T00:40:46.339669030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc8568d5-d7fx5,Uid:89fcd0a8-8017-46e2-b5fb-22df060c0c43,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"a6396f88a8d08fcd2ab1b2ed394b399ce554e9a304b38deea4f25ea5246654b9\"" May 17 00:40:46.355000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.355000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:40:46.355000 audit: BPF prog-id=18 op=LOAD May 17 00:40:46.373691 env[1303]: time="2025-05-17T00:40:46.373620275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:46.373884 env[1303]: time="2025-05-17T00:40:46.373857569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:46.374014 env[1303]: time="2025-05-17T00:40:46.373989470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:46.374342 env[1303]: time="2025-05-17T00:40:46.374315454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e pid=4320 runtime=io.containerd.runc.v2 May 17 00:40:46.355000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdaad44270 a2=98 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.355000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.366000 audit: BPF prog-id=18 op=UNLOAD May 17 00:40:46.366000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.366000 audit: BPF prog-id=19 op=LOAD May 17 00:40:46.366000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdaad44080 a2=94 a3=54428f items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.366000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.383000 audit: BPF prog-id=19 op=UNLOAD May 17 00:40:46.383000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:40:46.408041 kubelet[2118]: E0517 00:40:46.407045 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:46.383000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.383000 audit: BPF prog-id=20 op=LOAD May 17 00:40:46.383000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdaad440b0 a2=94 a3=2 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.383000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit: BPF prog-id=20 op=UNLOAD May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad43f80 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf 
} for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdaad43fb0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdaad43ec0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad43fd0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad43fb0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad43fa0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: 
AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad43fd0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdaad43fb0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdaad43fd0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdaad43fa0 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdaad44010 a2=28 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 
00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.416000 audit: BPF prog-id=21 op=LOAD
May 17 00:40:46.416000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdaad43e80 a2=94 a3=0 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
May 17 00:40:46.447000 audit: BPF prog-id=21 op=UNLOAD
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdaad43e70 a2=50 a3=2800 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.447000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdaad43e70 a2=50 a3=2800 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.447000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.447000 audit: BPF prog-id=22 op=LOAD
May 17 00:40:46.447000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdaad43690 a2=94 a3=2 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.447000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
May 17 00:40:46.453000 audit: BPF prog-id=22 op=UNLOAD
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { perfmon } for pid=4321 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit[4321]: AVC avc: denied { bpf } for pid=4321 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.453000 audit: BPF prog-id=23 op=LOAD
May 17 00:40:46.453000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdaad43790 a2=94 a3=30 items=0 ppid=3837 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.453000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470
May 17 00:40:46.491770 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:40:46.508425 env[1303]: time="2025-05-17T00:40:46.507552098Z" level=info msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\""
May 17 00:40:46.509197 env[1303]: time="2025-05-17T00:40:46.509168085Z" level=info msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\""
May 17 00:40:46.511503 env[1303]: time="2025-05-17T00:40:46.509765968Z" level=info msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\""
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.531000 audit: BPF prog-id=24 op=LOAD
May 17 00:40:46.531000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe6f21fc0 a2=98 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.531000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:46.544000 audit: BPF prog-id=24 op=UNLOAD
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.570000 audit[4395]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:46.570000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea5d86bd0 a2=0 a3=7ffea5d86bbc items=0 ppid=2242 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:46.584000 audit[4395]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:46.584000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffea5d86bd0 a2=0 a3=0 items=0 ppid=2242 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:46.544000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.544000 audit: BPF prog-id=25 op=LOAD
May 17 00:40:46.544000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe6f21db0 a2=94 a3=54428f items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.544000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:46.589000 audit: BPF prog-id=25 op=UNLOAD
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.599132 kubelet[2118]: I0517 00:40:46.599065 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qd4gz" podStartSLOduration=49.599044602 podStartE2EDuration="49.599044602s" podCreationTimestamp="2025-05-17 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:46.49181135 +0000 UTC m=+56.090259182" watchObservedRunningTime="2025-05-17 00:40:46.599044602 +0000 UTC m=+56.197492404"
May 17 00:40:46.589000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.589000 audit: BPF prog-id=26 op=LOAD
May 17 00:40:46.589000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe6f21de0 a2=94 a3=2 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.589000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:46.600000 audit: BPF prog-id=26 op=UNLOAD
May 17 00:40:46.726000 audit[4417]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=4417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:46.726000 audit[4417]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcefd13c60 a2=0 a3=7ffcefd13c4c items=0 ppid=2242 pid=4417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:46.750233 env[1303]: time="2025-05-17T00:40:46.750184046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-6mszs,Uid:4c9eee5c-cc38-4032-8b07-e8d97094a990,Namespace:calico-system,Attempt:1,} returns sandbox id \"9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e\""
May 17 00:40:46.752000 audit[4417]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=4417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:40:46.752000 audit[4417]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcefd13c60 a2=0 a3=7ffcefd13c4c items=0 ppid=2242 pid=4417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:40:46.766973 systemd-networkd[1079]: caliab9e590488a: Gained IPv6LL
May 17 00:40:46.896716 systemd-networkd[1079]: cali126211cea7a: Gained IPv6LL
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.791 [INFO][4396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.791 [INFO][4396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" iface="eth0" netns="/var/run/netns/cni-c5b58bbc-219e-1ca7-418d-d9dda307cb0e"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.792 [INFO][4396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" iface="eth0" netns="/var/run/netns/cni-c5b58bbc-219e-1ca7-418d-d9dda307cb0e"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.792 [INFO][4396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" iface="eth0" netns="/var/run/netns/cni-c5b58bbc-219e-1ca7-418d-d9dda307cb0e"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.792 [INFO][4396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.792 [INFO][4396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.845 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.846 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.846 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.893 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.893 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0"
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.911 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:40:46.920890 env[1303]: 2025-05-17 00:40:46.914 [INFO][4396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b"
May 17 00:40:46.930941 systemd[1]: run-netns-cni\x2dc5b58bbc\x2d219e\x2d1ca7\x2d418d\x2dd9dda307cb0e.mount: Deactivated successfully.
May 17 00:40:46.954875 env[1303]: time="2025-05-17T00:40:46.954789002Z" level=info msg="TearDown network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" successfully"
May 17 00:40:46.955120 env[1303]: time="2025-05-17T00:40:46.955077112Z" level=info msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" returns successfully"
May 17 00:40:46.963658 env[1303]: time="2025-05-17T00:40:46.963593299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t72dz,Uid:df42f368-756b-4bd3-8365-0200df6a0484,Namespace:calico-system,Attempt:1,}"
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.971000 audit: BPF prog-id=27 op=LOAD
May 17 00:40:46.971000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffe6f21ca0 a2=94 a3=1 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.971000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:46.992000 audit: BPF prog-id=27 op=UNLOAD
May 17 00:40:46.992000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:46.992000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffe6f21d70 a2=50 a3=7fffe6f21e50 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:46.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.007000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.007000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21cb0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.007000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.007000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffe6f21ce0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.007000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.007000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffe6f21bf0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.007000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.007000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21d00 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.007000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.007000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21ce0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21cd0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21d00 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffe6f21ce0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffe6f21d00 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffe6f21cd0 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffe6f21d40 a2=28 a3=0 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffe6f21af0 a2=50 a3=1 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41
May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:40:47.013000 audit[4371]: AVC avc: denied
{ bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.013000 audit: BPF prog-id=28 op=LOAD May 17 00:40:47.013000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe6f21af0 a2=94 a3=5 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.013000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.026000 audit: BPF prog-id=28 op=UNLOAD May 17 00:40:47.026000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.026000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffe6f21ba0 a2=50 a3=1 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.026000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.026000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.026000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffe6f21cc0 a2=4 a3=38 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.026000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.862 [INFO][4381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.862 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" iface="eth0" netns="/var/run/netns/cni-caca8c53-0a77-7592-c645-0844fe24efc5" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.862 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" iface="eth0" netns="/var/run/netns/cni-caca8c53-0a77-7592-c645-0844fe24efc5" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.863 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" iface="eth0" netns="/var/run/netns/cni-caca8c53-0a77-7592-c645-0844fe24efc5" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.863 [INFO][4381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.863 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.917 [INFO][4439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.921 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.929 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.979 [WARNING][4439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:46.979 [INFO][4439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:47.000 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:47.027819 env[1303]: 2025-05-17 00:40:47.002 [INFO][4381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:47.027000 audit[4371]: AVC avc: denied { confidentiality } for pid=4371 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:47.027000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffe6f21d10 a2=94 a3=6 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.027000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: 
denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { confidentiality } for pid=4371 comm="bpftool" 
lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:47.031000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffe6f214c0 a2=94 a3=88 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.031000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { perfmon } for pid=4371 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.031000 audit[4371]: AVC avc: denied { confidentiality } for pid=4371 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:40:47.031000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffe6f214c0 a2=94 a3=88 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.031000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.032000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.032000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffe6f22ef0 
a2=10 a3=208 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.032000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.032000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffe6f22d90 a2=10 a3=3 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.032000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.032000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffe6f22d30 a2=10 a3=3 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 
00:40:47.032000 audit[4371]: AVC avc: denied { bpf } for pid=4371 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:40:47.032000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffe6f22d30 a2=10 a3=7 items=0 ppid=3837 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.032000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:40:47.051401 env[1303]: time="2025-05-17T00:40:47.033324918Z" level=info msg="TearDown network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" successfully" May 17 00:40:47.051401 env[1303]: time="2025-05-17T00:40:47.033377288Z" level=info msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" returns successfully" May 17 00:40:47.051502 kubelet[2118]: E0517 00:40:47.033776 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:47.041561 systemd[1]: run-netns-cni\x2dcaca8c53\x2d0a77\x2d7592\x2dc645\x2d0844fe24efc5.mount: Deactivated successfully. 
May 17 00:40:47.052321 env[1303]: time="2025-05-17T00:40:47.052262339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2z6nm,Uid:e17ba7d8-0fd1-43ad-968d-26d53c711122,Namespace:kube-system,Attempt:1,}" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.879 [INFO][4408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.880 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" iface="eth0" netns="/var/run/netns/cni-56a52b0f-105b-0d41-5068-3e9085619741" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.880 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" iface="eth0" netns="/var/run/netns/cni-56a52b0f-105b-0d41-5068-3e9085619741" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.880 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" iface="eth0" netns="/var/run/netns/cni-56a52b0f-105b-0d41-5068-3e9085619741" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.880 [INFO][4408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.880 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.960 [INFO][4445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.960 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:46.996 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:47.037 [WARNING][4445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:47.037 [INFO][4445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:47.051 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:47.080179 env[1303]: 2025-05-17 00:40:47.066 [INFO][4408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:47.084000 audit: BPF prog-id=23 op=UNLOAD May 17 00:40:47.106033 systemd[1]: run-netns-cni\x2d56a52b0f\x2d105b\x2d0d41\x2d5068\x2d3e9085619741.mount: Deactivated successfully. 
May 17 00:40:47.117492 env[1303]: time="2025-05-17T00:40:47.117436405Z" level=info msg="TearDown network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" successfully" May 17 00:40:47.117712 env[1303]: time="2025-05-17T00:40:47.117662406Z" level=info msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" returns successfully" May 17 00:40:47.118511 env[1303]: time="2025-05-17T00:40:47.118485779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfbcfbd6f-78x8j,Uid:58c33d02-5e7f-4996-a72a-2b2ef65a5742,Namespace:calico-apiserver,Attempt:1,}" May 17 00:40:47.163232 systemd-networkd[1079]: cali0f627461a63: Gained IPv6LL May 17 00:40:47.420294 kubelet[2118]: E0517 00:40:47.414311 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:47.446000 audit[4548]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.446000 audit[4548]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffec7c9ba20 a2=0 a3=7ffec7c9ba0c items=0 ppid=3837 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.446000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.517000 audit[4550]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.517000 audit[4550]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe7f19f640 a2=0 a3=7ffe7f19f62c 
items=0 ppid=3837 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.517000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.532236 systemd-networkd[1079]: calif380dc8f745: Link UP May 17 00:40:47.536552 systemd-networkd[1079]: calif380dc8f745: Gained carrier May 17 00:40:47.537134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif380dc8f745: link becomes ready May 17 00:40:47.536000 audit[4547]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.536000 audit[4547]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe334ea0b0 a2=0 a3=7ffe334ea09c items=0 ppid=3837 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.536000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.560000 audit[4542]: NETFILTER_CFG table=filter:108 family=2 entries=233 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.560000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=136592 a0=3 a1=7ffcb26e5600 a2=0 a3=0 items=0 ppid=3837 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.560000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.202 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t72dz-eth0 csi-node-driver- calico-system df42f368-756b-4bd3-8365-0200df6a0484 1049 0 2025-05-17 00:40:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t72dz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif380dc8f745 [] [] }} ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.202 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.335 [INFO][4507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" HandleID="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.335 [INFO][4507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" 
HandleID="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t72dz", "timestamp":"2025-05-17 00:40:47.33551168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.335 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.335 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.335 [INFO][4507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.392 [INFO][4507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.422 [INFO][4507] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.452 [INFO][4507] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.457 [INFO][4507] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.462 [INFO][4507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.463 [INFO][4507] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.470 [INFO][4507] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445 May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.497 [INFO][4507] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.523 [INFO][4507] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.523 [INFO][4507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" host="localhost" May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.523 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:47.576658 env[1303]: 2025-05-17 00:40:47.523 [INFO][4507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" HandleID="k8s-pod-network.010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.529 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t72dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df42f368-756b-4bd3-8365-0200df6a0484", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t72dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif380dc8f745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.529 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.529 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif380dc8f745 ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.533 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.534 [INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t72dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df42f368-756b-4bd3-8365-0200df6a0484", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445", Pod:"csi-node-driver-t72dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif380dc8f745", MAC:"9e:f8:3c:c4:60:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.578976 env[1303]: 2025-05-17 00:40:47.572 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445" Namespace="calico-system" Pod="csi-node-driver-t72dz" WorkloadEndpoint="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:47.644525 env[1303]: time="2025-05-17T00:40:47.644161713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:47.645003 env[1303]: time="2025-05-17T00:40:47.644742822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:47.645003 env[1303]: time="2025-05-17T00:40:47.644765486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:47.645003 env[1303]: time="2025-05-17T00:40:47.644936322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445 pid=4578 runtime=io.containerd.runc.v2 May 17 00:40:47.720000 audit[4601]: NETFILTER_CFG table=filter:109 family=2 entries=90 op=nft_register_chain pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.720000 audit[4601]: SYSCALL arch=c000003e syscall=46 success=yes exit=48256 a0=3 a1=7ffee6493ce0 a2=0 a3=7ffee6493ccc items=0 ppid=3837 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.720000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.725572 systemd-networkd[1079]: cali286647d9c40: Link UP May 17 00:40:47.730816 systemd-networkd[1079]: cali286647d9c40: Gained carrier May 17 00:40:47.731144 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali286647d9c40: link becomes ready May 17 00:40:47.767684 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.317 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0 coredns-7c65d6cfc9- kube-system e17ba7d8-0fd1-43ad-968d-26d53c711122 1051 0 2025-05-17 00:39:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-7c65d6cfc9-2z6nm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali286647d9c40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.318 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.403 [INFO][4522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" HandleID="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.403 [INFO][4522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" HandleID="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-2z6nm", "timestamp":"2025-05-17 00:40:47.403099306 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.403 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.526 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.526 [INFO][4522] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.553 [INFO][4522] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.582 [INFO][4522] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.616 [INFO][4522] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.632 [INFO][4522] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.648 [INFO][4522] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.648 [INFO][4522] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.671 [INFO][4522] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2 May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.681 [INFO][4522] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.705 [INFO][4522] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.705 [INFO][4522] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" host="localhost" May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.705 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:47.768835 env[1303]: 2025-05-17 00:40:47.705 [INFO][4522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" HandleID="k8s-pod-network.cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.770914 env[1303]: 2025-05-17 00:40:47.716 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e17ba7d8-0fd1-43ad-968d-26d53c711122", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-2z6nm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali286647d9c40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.770914 env[1303]: 2025-05-17 00:40:47.716 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.770914 env[1303]: 2025-05-17 00:40:47.717 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali286647d9c40 ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.770914 env[1303]: 2025-05-17 00:40:47.731 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 
00:40:47.770914 env[1303]: 2025-05-17 00:40:47.731 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e17ba7d8-0fd1-43ad-968d-26d53c711122", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2", Pod:"coredns-7c65d6cfc9-2z6nm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali286647d9c40", MAC:"fe:fb:aa:84:ba:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.770914 env[1303]: 2025-05-17 00:40:47.756 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2z6nm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:47.882541 env[1303]: time="2025-05-17T00:40:47.882260912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:47.882732 env[1303]: time="2025-05-17T00:40:47.882672187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:47.885836 env[1303]: time="2025-05-17T00:40:47.885783067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:47.886666 env[1303]: time="2025-05-17T00:40:47.886605207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2 pid=4631 runtime=io.containerd.runc.v2 May 17 00:40:47.900160 systemd-networkd[1079]: cali12fd962bfb2: Link UP May 17 00:40:47.910747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:40:47.911627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali12fd962bfb2: link becomes ready May 17 00:40:47.911797 env[1303]: time="2025-05-17T00:40:47.910454186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t72dz,Uid:df42f368-756b-4bd3-8365-0200df6a0484,Namespace:calico-system,Attempt:1,} returns sandbox id \"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445\"" May 17 00:40:47.911466 systemd-networkd[1079]: cali12fd962bfb2: Gained carrier May 17 00:40:47.896000 audit[4632]: NETFILTER_CFG table=filter:110 family=2 entries=48 op=nft_register_chain pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.896000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=22688 a0=3 a1=7ffe88b984c0 a2=0 a3=7ffe88b984ac items=0 ppid=3837 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.896000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.321 [INFO][4490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0 calico-apiserver-7dfbcfbd6f- 
calico-apiserver 58c33d02-5e7f-4996-a72a-2b2ef65a5742 1050 0 2025-05-17 00:40:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dfbcfbd6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dfbcfbd6f-78x8j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12fd962bfb2 [] [] }} ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.322 [INFO][4490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.443 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" HandleID="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.443 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" HandleID="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-7dfbcfbd6f-78x8j", "timestamp":"2025-05-17 00:40:47.443461285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.444 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.706 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.706 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.736 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.768 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.781 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.786 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.805 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.805 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.828 [INFO][4529] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67 May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.856 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.890 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.890 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" host="localhost" May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.890 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:47.966670 env[1303]: 2025-05-17 00:40:47.891 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" HandleID="k8s-pod-network.ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.967621 env[1303]: 2025-05-17 00:40:47.897 [INFO][4490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0", GenerateName:"calico-apiserver-7dfbcfbd6f-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"58c33d02-5e7f-4996-a72a-2b2ef65a5742", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfbcfbd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dfbcfbd6f-78x8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12fd962bfb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.967621 env[1303]: 2025-05-17 00:40:47.897 [INFO][4490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.967621 env[1303]: 2025-05-17 00:40:47.897 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12fd962bfb2 ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.967621 env[1303]: 2025-05-17 
00:40:47.913 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.967621 env[1303]: 2025-05-17 00:40:47.915 [INFO][4490] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0", GenerateName:"calico-apiserver-7dfbcfbd6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c33d02-5e7f-4996-a72a-2b2ef65a5742", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfbcfbd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67", Pod:"calico-apiserver-7dfbcfbd6f-78x8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12fd962bfb2", MAC:"22:54:56:08:e7:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:47.967621 env[1303]: 2025-05-17 00:40:47.963 [INFO][4490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67" Namespace="calico-apiserver" Pod="calico-apiserver-7dfbcfbd6f-78x8j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:47.981381 systemd-networkd[1079]: vxlan.calico: Gained IPv6LL May 17 00:40:47.981000 audit[4659]: NETFILTER_CFG table=filter:111 family=2 entries=63 op=nft_register_chain pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:40:47.981000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=30648 a0=3 a1=7fffc92f19f0 a2=0 a3=7fffc92f19dc items=0 ppid=3837 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:47.981000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:40:48.009619 env[1303]: time="2025-05-17T00:40:48.008680921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:48.009619 env[1303]: time="2025-05-17T00:40:48.008744282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:48.009619 env[1303]: time="2025-05-17T00:40:48.008758740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:48.009619 env[1303]: time="2025-05-17T00:40:48.008928663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67 pid=4673 runtime=io.containerd.runc.v2 May 17 00:40:48.010989 systemd[1]: run-containerd-runc-k8s.io-cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2-runc.ZWZ96P.mount: Deactivated successfully. May 17 00:40:48.054313 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:48.108451 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:48.112211 env[1303]: time="2025-05-17T00:40:48.112091960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2z6nm,Uid:e17ba7d8-0fd1-43ad-968d-26d53c711122,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2\"" May 17 00:40:48.117695 kubelet[2118]: E0517 00:40:48.116544 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:48.120808 env[1303]: time="2025-05-17T00:40:48.120769772Z" level=info msg="CreateContainer within sandbox \"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:40:48.188404 env[1303]: time="2025-05-17T00:40:48.188354591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dfbcfbd6f-78x8j,Uid:58c33d02-5e7f-4996-a72a-2b2ef65a5742,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67\"" May 17 00:40:48.236038 systemd-networkd[1079]: 
cali5090752f7db: Gained IPv6LL May 17 00:40:48.367654 env[1303]: time="2025-05-17T00:40:48.367581328Z" level=info msg="CreateContainer within sandbox \"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bef4f0aa8b5ed0085dfca9a95b9233b07c1b6acf17e8a1c41f3ee887e0f7a384\"" May 17 00:40:48.374785 env[1303]: time="2025-05-17T00:40:48.374723530Z" level=info msg="StartContainer for \"bef4f0aa8b5ed0085dfca9a95b9233b07c1b6acf17e8a1c41f3ee887e0f7a384\"" May 17 00:40:48.422837 kubelet[2118]: E0517 00:40:48.422405 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:48.500787 env[1303]: time="2025-05-17T00:40:48.500388605Z" level=info msg="StartContainer for \"bef4f0aa8b5ed0085dfca9a95b9233b07c1b6acf17e8a1c41f3ee887e0f7a384\" returns successfully" May 17 00:40:49.453831 systemd-networkd[1079]: calif380dc8f745: Gained IPv6LL May 17 00:40:49.515304 systemd-networkd[1079]: cali286647d9c40: Gained IPv6LL May 17 00:40:49.518558 kubelet[2118]: E0517 00:40:49.517600 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:49.718949 kubelet[2118]: I0517 00:40:49.718497 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2z6nm" podStartSLOduration=52.718474549 podStartE2EDuration="52.718474549s" podCreationTimestamp="2025-05-17 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:49.627124699 +0000 UTC m=+59.225572501" watchObservedRunningTime="2025-05-17 00:40:49.718474549 +0000 UTC m=+59.316922351" May 17 00:40:49.795701 kernel: kauditd_printk_skb: 537 callbacks 
suppressed May 17 00:40:49.795910 kernel: audit: type=1325 audit(1747442449.788:420): table=filter:112 family=2 entries=14 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.788000 audit[4759]: NETFILTER_CFG table=filter:112 family=2 entries=14 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.788000 audit[4759]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb9cf54f0 a2=0 a3=7ffeb9cf54dc items=0 ppid=2242 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.812788 kernel: audit: type=1300 audit(1747442449.788:420): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb9cf54f0 a2=0 a3=7ffeb9cf54dc items=0 ppid=2242 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.812890 kernel: audit: type=1327 audit(1747442449.788:420): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.805000 audit[4759]: NETFILTER_CFG table=nat:113 family=2 entries=44 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.820778 kernel: audit: type=1325 audit(1747442449.805:421): table=nat:113 family=2 entries=44 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.820916 kernel: audit: type=1300 audit(1747442449.805:421): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb9cf54f0 
a2=0 a3=7ffeb9cf54dc items=0 ppid=2242 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.805000 audit[4759]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb9cf54f0 a2=0 a3=7ffeb9cf54dc items=0 ppid=2242 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.830132 kernel: audit: type=1327 audit(1747442449.805:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.841971 systemd-networkd[1079]: cali12fd962bfb2: Gained IPv6LL May 17 00:40:49.867000 audit[4761]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.867000 audit[4761]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc78b3d070 a2=0 a3=7ffc78b3d05c items=0 ppid=2242 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.882168 kernel: audit: type=1325 audit(1747442449.867:422): table=filter:114 family=2 entries=14 op=nft_register_rule pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.882307 kernel: audit: type=1300 audit(1747442449.867:422): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc78b3d070 a2=0 a3=7ffc78b3d05c items=0 ppid=2242 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.882335 kernel: audit: type=1327 audit(1747442449.867:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.901000 audit[4761]: NETFILTER_CFG table=nat:115 family=2 entries=56 op=nft_register_chain pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:49.901000 audit[4761]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc78b3d070 a2=0 a3=7ffc78b3d05c items=0 ppid=2242 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:49.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:49.914761 kernel: audit: type=1325 audit(1747442449.901:423): table=nat:115 family=2 entries=56 op=nft_register_chain pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:50.226855 env[1303]: time="2025-05-17T00:40:50.226808996Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:50.241756 env[1303]: time="2025-05-17T00:40:50.241685158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:50.249944 env[1303]: time="2025-05-17T00:40:50.249866492Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:50.257791 env[1303]: time="2025-05-17T00:40:50.252894267Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:50.257791 env[1303]: time="2025-05-17T00:40:50.253261146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:40:50.257791 env[1303]: time="2025-05-17T00:40:50.255883978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:40:50.267549 env[1303]: time="2025-05-17T00:40:50.267501516Z" level=info msg="CreateContainer within sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:40:50.316040 env[1303]: time="2025-05-17T00:40:50.315929388Z" level=info msg="CreateContainer within sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\"" May 17 00:40:50.318627 env[1303]: time="2025-05-17T00:40:50.318590994Z" level=info msg="StartContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\"" May 17 00:40:50.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.136:22-10.0.0.1:50812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:50.389695 systemd[1]: Started sshd@9-10.0.0.136:22-10.0.0.1:50812.service. 
May 17 00:40:50.471358 env[1303]: time="2025-05-17T00:40:50.471031180Z" level=info msg="StartContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" returns successfully" May 17 00:40:50.495730 env[1303]: time="2025-05-17T00:40:50.495590260Z" level=info msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" May 17 00:40:50.508135 sshd[4786]: Accepted publickey for core from 10.0.0.1 port 50812 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:50.506000 audit[4786]: USER_ACCT pid=4786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.509000 audit[4786]: CRED_ACQ pid=4786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.509000 audit[4786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8d253e40 a2=3 a3=0 items=0 ppid=1 pid=4786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:50.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:40:50.511366 sshd[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:50.526634 systemd[1]: Started session-10.scope. May 17 00:40:50.528139 systemd-logind[1293]: New session 10 of user core. 
May 17 00:40:50.539000 audit[4786]: USER_START pid=4786 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.544320 kubelet[2118]: E0517 00:40:50.544288 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:50.542000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.611383 kubelet[2118]: I0517 00:40:50.606202 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d67ffffc-hkhhx" podStartSLOduration=36.792996815 podStartE2EDuration="41.606174347s" podCreationTimestamp="2025-05-17 00:40:09 +0000 UTC" firstStartedPulling="2025-05-17 00:40:45.441732308 +0000 UTC m=+55.040180110" lastFinishedPulling="2025-05-17 00:40:50.25490984 +0000 UTC m=+59.853357642" observedRunningTime="2025-05-17 00:40:50.596218259 +0000 UTC m=+60.194666091" watchObservedRunningTime="2025-05-17 00:40:50.606174347 +0000 UTC m=+60.204622149" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.623 [WARNING][4811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0", GenerateName:"calico-kube-controllers-ddd9bb8f8-", Namespace:"calico-system", SelfLink:"", UID:"b53473ef-a278-4a28-a57c-3f8a53828a27", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd9bb8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb", Pod:"calico-kube-controllers-ddd9bb8f8-zxssx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab9e590488a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.623 [INFO][4811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.623 [INFO][4811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" iface="eth0" netns="" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.623 [INFO][4811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.623 [INFO][4811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.692 [INFO][4835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.692 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.693 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.705 [WARNING][4835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.705 [INFO][4835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.713 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:50.719268 env[1303]: 2025-05-17 00:40:50.716 [INFO][4811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.720013 env[1303]: time="2025-05-17T00:40:50.719960271Z" level=info msg="TearDown network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" successfully" May 17 00:40:50.720160 env[1303]: time="2025-05-17T00:40:50.720098054Z" level=info msg="StopPodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" returns successfully" May 17 00:40:50.730145 env[1303]: time="2025-05-17T00:40:50.725427047Z" level=info msg="RemovePodSandbox for \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" May 17 00:40:50.730145 env[1303]: time="2025-05-17T00:40:50.725478445Z" level=info msg="Forcibly stopping sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\"" May 17 00:40:50.855454 sshd[4786]: pam_unix(sshd:session): session closed for user core May 17 00:40:50.855000 audit[4786]: USER_END pid=4786 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.856000 audit[4786]: CRED_DISP pid=4786 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:50.863749 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit. May 17 00:40:50.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.136:22-10.0.0.1:50812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:50.865422 systemd[1]: sshd@9-10.0.0.136:22-10.0.0.1:50812.service: Deactivated successfully. May 17 00:40:50.866453 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:40:50.868433 systemd-logind[1293]: Removed session 10. 
May 17 00:40:50.940000 audit[4873]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:50.940000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd27cb39a0 a2=0 a3=7ffd27cb398c items=0 ppid=2242 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:50.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:50.951000 audit[4873]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:50.951000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd27cb39a0 a2=0 a3=7ffd27cb398c items=0 ppid=2242 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:50.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.853 [WARNING][4856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0", GenerateName:"calico-kube-controllers-ddd9bb8f8-", Namespace:"calico-system", SelfLink:"", UID:"b53473ef-a278-4a28-a57c-3f8a53828a27", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd9bb8f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb", Pod:"calico-kube-controllers-ddd9bb8f8-zxssx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab9e590488a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.854 [INFO][4856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.854 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" iface="eth0" netns="" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.854 [INFO][4856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.854 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.945 [INFO][4864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.945 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.945 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.957 [WARNING][4864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.957 [INFO][4864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" HandleID="k8s-pod-network.9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" Workload="localhost-k8s-calico--kube--controllers--ddd9bb8f8--zxssx-eth0" May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.960 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:50.966818 env[1303]: 2025-05-17 00:40:50.962 [INFO][4856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076" May 17 00:40:50.967380 env[1303]: time="2025-05-17T00:40:50.966892694Z" level=info msg="TearDown network for sandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" successfully" May 17 00:40:50.985067 env[1303]: time="2025-05-17T00:40:50.984930045Z" level=info msg="RemovePodSandbox \"9b7f5d4ea482d6eb56c63148efe8deb94574ce6495b55cfc3d9275467502e076\" returns successfully" May 17 00:40:50.986596 env[1303]: time="2025-05-17T00:40:50.986547300Z" level=info msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\"" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.082 [WARNING][4886] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e17ba7d8-0fd1-43ad-968d-26d53c711122", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2", Pod:"coredns-7c65d6cfc9-2z6nm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali286647d9c40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.082 [INFO][4886] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.082 [INFO][4886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" iface="eth0" netns="" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.082 [INFO][4886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.082 [INFO][4886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.140 [INFO][4896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.140 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.140 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.159 [WARNING][4896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.159 [INFO][4896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.161 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:51.169262 env[1303]: 2025-05-17 00:40:51.165 [INFO][4886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.169262 env[1303]: time="2025-05-17T00:40:51.167623184Z" level=info msg="TearDown network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" successfully" May 17 00:40:51.169262 env[1303]: time="2025-05-17T00:40:51.167662108Z" level=info msg="StopPodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" returns successfully" May 17 00:40:51.169262 env[1303]: time="2025-05-17T00:40:51.168250650Z" level=info msg="RemovePodSandbox for \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\"" May 17 00:40:51.169262 env[1303]: time="2025-05-17T00:40:51.168283061Z" level=info msg="Forcibly stopping sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\"" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.266 [WARNING][4912] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e17ba7d8-0fd1-43ad-968d-26d53c711122", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd35e0373bedea2c813d696b1f3486cb058ee96c1e4c3b3d549d7e27533b68f2", Pod:"coredns-7c65d6cfc9-2z6nm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali286647d9c40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.266 [INFO][4912] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.266 [INFO][4912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" iface="eth0" netns="" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.266 [INFO][4912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.266 [INFO][4912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.318 [INFO][4919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.318 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.318 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.327 [WARNING][4919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.327 [INFO][4919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" HandleID="k8s-pod-network.c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" Workload="localhost-k8s-coredns--7c65d6cfc9--2z6nm-eth0" May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.332 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:51.349367 env[1303]: 2025-05-17 00:40:51.337 [INFO][4912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7" May 17 00:40:51.349367 env[1303]: time="2025-05-17T00:40:51.349041118Z" level=info msg="TearDown network for sandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" successfully" May 17 00:40:51.363159 env[1303]: time="2025-05-17T00:40:51.362986285Z" level=info msg="RemovePodSandbox \"c3873b08782365d44b4d5f07ef54422b01eda9e6a338e322f9489a568fe411d7\" returns successfully" May 17 00:40:51.366161 env[1303]: time="2025-05-17T00:40:51.363702902Z" level=info msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\"" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.447 [WARNING][4936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t72dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df42f368-756b-4bd3-8365-0200df6a0484", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445", Pod:"csi-node-driver-t72dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif380dc8f745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.448 [INFO][4936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.448 [INFO][4936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" iface="eth0" netns="" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.448 [INFO][4936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.448 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.508 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.508 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.508 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.515 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.515 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.518 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:51.528945 env[1303]: 2025-05-17 00:40:51.526 [INFO][4936] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.528945 env[1303]: time="2025-05-17T00:40:51.528291221Z" level=info msg="TearDown network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" successfully" May 17 00:40:51.528945 env[1303]: time="2025-05-17T00:40:51.528333151Z" level=info msg="StopPodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" returns successfully" May 17 00:40:51.529853 env[1303]: time="2025-05-17T00:40:51.529824103Z" level=info msg="RemovePodSandbox for \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\"" May 17 00:40:51.529985 env[1303]: time="2025-05-17T00:40:51.529934523Z" level=info msg="Forcibly stopping sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\"" May 17 00:40:51.556367 kubelet[2118]: E0517 00:40:51.554469 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.766 [WARNING][4962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t72dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"df42f368-756b-4bd3-8365-0200df6a0484", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445", Pod:"csi-node-driver-t72dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif380dc8f745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.766 [INFO][4962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.766 [INFO][4962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" iface="eth0" netns="" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.766 [INFO][4962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.766 [INFO][4962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.809 [INFO][4970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.809 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.809 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.825 [WARNING][4970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.825 [INFO][4970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" HandleID="k8s-pod-network.73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" Workload="localhost-k8s-csi--node--driver--t72dz-eth0" May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.847 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:51.857228 env[1303]: 2025-05-17 00:40:51.850 [INFO][4962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b" May 17 00:40:51.858033 env[1303]: time="2025-05-17T00:40:51.857241278Z" level=info msg="TearDown network for sandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" successfully" May 17 00:40:51.889499 env[1303]: time="2025-05-17T00:40:51.889416520Z" level=info msg="RemovePodSandbox \"73eddf498c16e7cbe04ffdd013f8998f3777033a702dbd648fda69cc93021e9b\" returns successfully" May 17 00:40:51.890400 env[1303]: time="2025-05-17T00:40:51.890354989Z" level=info msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.113 [WARNING][4986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"11364172-e58d-4414-85d0-e2ab3fb5d624", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863", Pod:"calico-apiserver-5d67ffffc-lwqn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali126211cea7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.114 [INFO][4986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.114 [INFO][4986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" iface="eth0" netns="" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.114 [INFO][4986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.114 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.175 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.179 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.179 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.201 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.201 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.207 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:52.213034 env[1303]: 2025-05-17 00:40:52.209 [INFO][4986] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.213034 env[1303]: time="2025-05-17T00:40:52.212218974Z" level=info msg="TearDown network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" successfully" May 17 00:40:52.213034 env[1303]: time="2025-05-17T00:40:52.212257768Z" level=info msg="StopPodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" returns successfully" May 17 00:40:52.247081 env[1303]: time="2025-05-17T00:40:52.214178249Z" level=info msg="RemovePodSandbox for \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" May 17 00:40:52.247081 env[1303]: time="2025-05-17T00:40:52.227031505Z" level=info msg="Forcibly stopping sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\"" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.356 [WARNING][5014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"11364172-e58d-4414-85d0-e2ab3fb5d624", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863", Pod:"calico-apiserver-5d67ffffc-lwqn5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali126211cea7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.357 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.357 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" iface="eth0" netns="" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.357 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.357 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.417 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.418 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.418 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.432 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.432 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" HandleID="k8s-pod-network.1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.434 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:52.450006 env[1303]: 2025-05-17 00:40:52.443 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1" May 17 00:40:52.450006 env[1303]: time="2025-05-17T00:40:52.448495378Z" level=info msg="TearDown network for sandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" successfully" May 17 00:40:52.584254 env[1303]: time="2025-05-17T00:40:52.584174480Z" level=info msg="RemovePodSandbox \"1b252ad7316ee019fbb2b1b171360e693531ec49ac0f14788d1d3ec47bd658c1\" returns successfully" May 17 00:40:52.591814 env[1303]: time="2025-05-17T00:40:52.591225157Z" level=info msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.810 [WARNING][5041] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a553309-56ea-422b-ad2d-4882053f4a1c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082", Pod:"coredns-7c65d6cfc9-qd4gz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf0b415252c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.810 [INFO][5041] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.810 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" iface="eth0" netns="" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.810 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.810 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.848 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.848 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.848 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.856 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.857 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.860 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:52.864414 env[1303]: 2025-05-17 00:40:52.862 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.864414 env[1303]: time="2025-05-17T00:40:52.864218938Z" level=info msg="TearDown network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" successfully" May 17 00:40:52.864414 env[1303]: time="2025-05-17T00:40:52.864262001Z" level=info msg="StopPodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" returns successfully" May 17 00:40:52.865371 env[1303]: time="2025-05-17T00:40:52.864755320Z" level=info msg="RemovePodSandbox for \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" May 17 00:40:52.865371 env[1303]: time="2025-05-17T00:40:52.864800277Z" level=info msg="Forcibly stopping sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\"" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.918 [WARNING][5071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1a553309-56ea-422b-ad2d-4882053f4a1c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 39, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc468fc13fe6003d742638783ba373c96154874f5faa63449bb0d1f8142ba082", Pod:"coredns-7c65d6cfc9-qd4gz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf0b415252c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.918 [INFO][5071] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.919 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" iface="eth0" netns="" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.919 [INFO][5071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.919 [INFO][5071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.949 [INFO][5079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.949 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.949 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.957 [WARNING][5079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.957 [INFO][5079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" HandleID="k8s-pod-network.914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" Workload="localhost-k8s-coredns--7c65d6cfc9--qd4gz-eth0" May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.959 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:52.963945 env[1303]: 2025-05-17 00:40:52.961 [INFO][5071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47" May 17 00:40:52.964710 env[1303]: time="2025-05-17T00:40:52.964640526Z" level=info msg="TearDown network for sandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" successfully" May 17 00:40:53.047125 env[1303]: time="2025-05-17T00:40:53.047033082Z" level=info msg="RemovePodSandbox \"914b681bb8c3ee9e7d4b71eee76e8401c4f44281c185f8ee8b8c7e13a574ef47\" returns successfully" May 17 00:40:53.067976 env[1303]: time="2025-05-17T00:40:53.047870277Z" level=info msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" May 17 00:40:53.097000 audit[5098]: NETFILTER_CFG table=filter:118 family=2 entries=13 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:53.097000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffce8f55620 a2=0 a3=7ffce8f5560c items=0 ppid=2242 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:53.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:53.109000 audit[5098]: NETFILTER_CFG table=nat:119 family=2 entries=27 op=nft_register_chain pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:53.109000 audit[5098]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffce8f55620 a2=0 a3=7ffce8f5560c items=0 ppid=2242 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:53.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.243 [WARNING][5101] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" WorkloadEndpoint="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.243 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.243 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" iface="eth0" netns="" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.243 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.244 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.399 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.399 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.399 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.426 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.426 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.431 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:53.439849 env[1303]: 2025-05-17 00:40:53.433 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.439849 env[1303]: time="2025-05-17T00:40:53.439415480Z" level=info msg="TearDown network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" successfully" May 17 00:40:53.439849 env[1303]: time="2025-05-17T00:40:53.439454574Z" level=info msg="StopPodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" returns successfully" May 17 00:40:53.443475 env[1303]: time="2025-05-17T00:40:53.441565606Z" level=info msg="RemovePodSandbox for \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" May 17 00:40:53.443475 env[1303]: time="2025-05-17T00:40:53.441601074Z" level=info msg="Forcibly stopping sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\"" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.531 [WARNING][5128] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" WorkloadEndpoint="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.531 [INFO][5128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.532 [INFO][5128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" iface="eth0" netns="" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.532 [INFO][5128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.532 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.567 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.567 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.567 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.577 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.577 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" HandleID="k8s-pod-network.46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" Workload="localhost-k8s-whisker--67d7f797b5--wggl9-eth0" May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.580 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:53.583980 env[1303]: 2025-05-17 00:40:53.582 [INFO][5128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7" May 17 00:40:53.584854 env[1303]: time="2025-05-17T00:40:53.584031499Z" level=info msg="TearDown network for sandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" successfully" May 17 00:40:53.800231 env[1303]: time="2025-05-17T00:40:53.799032233Z" level=info msg="RemovePodSandbox \"46fec21370e747b041da0447fbd93709a3f0419832ab82235ed0bafbcabf2fb7\" returns successfully" May 17 00:40:53.800674 env[1303]: time="2025-05-17T00:40:53.800634004Z" level=info msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.883 [WARNING][5153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4c9eee5c-cc38-4032-8b07-e8d97094a990", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e", Pod:"goldmane-8f77d7b6c-6mszs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5090752f7db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.884 [INFO][5153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.884 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" iface="eth0" netns="" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.884 [INFO][5153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.884 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.953 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.953 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.953 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.974 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.974 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.979 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:53.989937 env[1303]: 2025-05-17 00:40:53.988 [INFO][5153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:53.990573 env[1303]: time="2025-05-17T00:40:53.990519379Z" level=info msg="TearDown network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" successfully" May 17 00:40:53.990573 env[1303]: time="2025-05-17T00:40:53.990560788Z" level=info msg="StopPodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" returns successfully" May 17 00:40:53.995411 env[1303]: time="2025-05-17T00:40:53.991703946Z" level=info msg="RemovePodSandbox for \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" May 17 00:40:53.995411 env[1303]: time="2025-05-17T00:40:53.994900536Z" level=info msg="Forcibly stopping sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\"" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.062 [WARNING][5178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"4c9eee5c-cc38-4032-8b07-e8d97094a990", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9436315eae13d17442811c0cab3d6209b482b66b1bfd8b3dfbc05b8f2f7e403e", Pod:"goldmane-8f77d7b6c-6mszs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5090752f7db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.062 [INFO][5178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.062 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" iface="eth0" netns="" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.062 [INFO][5178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.062 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.206 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.206 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.206 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.263 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.263 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" HandleID="k8s-pod-network.f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" Workload="localhost-k8s-goldmane--8f77d7b6c--6mszs-eth0" May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.266 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:40:54.308632 env[1303]: 2025-05-17 00:40:54.275 [INFO][5178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db" May 17 00:40:54.308632 env[1303]: time="2025-05-17T00:40:54.303327261Z" level=info msg="TearDown network for sandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" successfully" May 17 00:40:54.377749 env[1303]: time="2025-05-17T00:40:54.377651134Z" level=info msg="RemovePodSandbox \"f77c9ed07a8e1408655cadd8cb1ac54a39b3b3f8a3f4540a74265463465de9db\" returns successfully" May 17 00:40:54.378334 env[1303]: time="2025-05-17T00:40:54.378291533Z" level=info msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.496 [WARNING][5204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"272cb491-0dd7-4c17-9835-83fe9d59eb06", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d", Pod:"calico-apiserver-5d67ffffc-hkhhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcd07a666c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.497 [INFO][5204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.497 [INFO][5204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" iface="eth0" netns="" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.497 [INFO][5204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.497 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.545 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.545 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.545 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.552 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.553 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.556 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:54.562240 env[1303]: 2025-05-17 00:40:54.559 [INFO][5204] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.562240 env[1303]: time="2025-05-17T00:40:54.562182421Z" level=info msg="TearDown network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" successfully" May 17 00:40:54.562240 env[1303]: time="2025-05-17T00:40:54.562227246Z" level=info msg="StopPodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" returns successfully" May 17 00:40:54.563615 env[1303]: time="2025-05-17T00:40:54.563541339Z" level=info msg="RemovePodSandbox for \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" May 17 00:40:54.563615 env[1303]: time="2025-05-17T00:40:54.563594941Z" level=info msg="Forcibly stopping sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\"" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.831 [WARNING][5231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0", GenerateName:"calico-apiserver-5d67ffffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"272cb491-0dd7-4c17-9835-83fe9d59eb06", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d67ffffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d", Pod:"calico-apiserver-5d67ffffc-hkhhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcd07a666c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.831 [INFO][5231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.831 [INFO][5231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" iface="eth0" netns="" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.831 [INFO][5231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.831 [INFO][5231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.854 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.854 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.854 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.862 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.862 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" HandleID="k8s-pod-network.675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.866 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:54.871299 env[1303]: 2025-05-17 00:40:54.868 [INFO][5231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e" May 17 00:40:54.872525 env[1303]: time="2025-05-17T00:40:54.872434168Z" level=info msg="TearDown network for sandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" successfully" May 17 00:40:54.952226 env[1303]: time="2025-05-17T00:40:54.951590023Z" level=info msg="RemovePodSandbox \"675c7955f36922ca93ab8b796229ca3c15d49f1500805183f4487e55de72783e\" returns successfully" May 17 00:40:54.952612 env[1303]: time="2025-05-17T00:40:54.952493163Z" level=info msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\"" May 17 00:40:54.978980 env[1303]: time="2025-05-17T00:40:54.978915455Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:55.067549 env[1303]: time="2025-05-17T00:40:55.067499736Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:54.995 [WARNING][5257] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0", GenerateName:"calico-apiserver-7dfbcfbd6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c33d02-5e7f-4996-a72a-2b2ef65a5742", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfbcfbd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67", Pod:"calico-apiserver-7dfbcfbd6f-78x8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12fd962bfb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 
00:40:55.096647 env[1303]: 2025-05-17 00:40:54.995 [INFO][5257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:54.995 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" iface="eth0" netns="" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:54.995 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:54.995 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.022 [INFO][5265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.022 [INFO][5265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.022 [INFO][5265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.088 [WARNING][5265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.088 [INFO][5265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.090 [INFO][5265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:55.096647 env[1303]: 2025-05-17 00:40:55.093 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.097238 env[1303]: time="2025-05-17T00:40:55.096678720Z" level=info msg="TearDown network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" successfully" May 17 00:40:55.097238 env[1303]: time="2025-05-17T00:40:55.096731461Z" level=info msg="StopPodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" returns successfully" May 17 00:40:55.097454 env[1303]: time="2025-05-17T00:40:55.097416465Z" level=info msg="RemovePodSandbox for \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\"" May 17 00:40:55.097504 env[1303]: time="2025-05-17T00:40:55.097463185Z" level=info msg="Forcibly stopping sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\"" May 17 00:40:55.123572 env[1303]: time="2025-05-17T00:40:55.122667702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:55.173910 env[1303]: 
2025-05-17 00:40:55.137 [WARNING][5284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0", GenerateName:"calico-apiserver-7dfbcfbd6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c33d02-5e7f-4996-a72a-2b2ef65a5742", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 40, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dfbcfbd6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67", Pod:"calico-apiserver-7dfbcfbd6f-78x8j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12fd962bfb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.137 [INFO][5284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.173910 
env[1303]: 2025-05-17 00:40:55.137 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" iface="eth0" netns="" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.137 [INFO][5284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.137 [INFO][5284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.161 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.161 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.161 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.167 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.167 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" HandleID="k8s-pod-network.75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" Workload="localhost-k8s-calico--apiserver--7dfbcfbd6f--78x8j-eth0" May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.169 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:40:55.173910 env[1303]: 2025-05-17 00:40:55.171 [INFO][5284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2" May 17 00:40:55.174424 env[1303]: time="2025-05-17T00:40:55.173956030Z" level=info msg="TearDown network for sandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" successfully" May 17 00:40:55.208794 env[1303]: time="2025-05-17T00:40:55.208657940Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:55.209505 env[1303]: time="2025-05-17T00:40:55.209467602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:40:55.211483 env[1303]: time="2025-05-17T00:40:55.211441170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:40:55.218550 env[1303]: time="2025-05-17T00:40:55.218496552Z" level=info msg="CreateContainer within sandbox 
\"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:40:55.447925 env[1303]: time="2025-05-17T00:40:55.447756335Z" level=info msg="RemovePodSandbox \"75d4bde9e5360f88cc5a1c4575091e3169ce0d7aa8e5c6022c25227cede0b7e2\" returns successfully" May 17 00:40:55.753791 env[1303]: time="2025-05-17T00:40:55.753623552Z" level=info msg="CreateContainer within sandbox \"c212ec0c49b3979802c7b3d82538a40fdbc6f24a3a2fe6241106ea88879067bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d27bc27d0f6a761f7e48def00b303ccb311a2db9bbeeb7a7a78e34c0c9e4d39f\"" May 17 00:40:55.754408 env[1303]: time="2025-05-17T00:40:55.754351278Z" level=info msg="StartContainer for \"d27bc27d0f6a761f7e48def00b303ccb311a2db9bbeeb7a7a78e34c0c9e4d39f\"" May 17 00:40:55.856595 systemd[1]: Started sshd@10-10.0.0.136:22-10.0.0.1:43466.service. May 17 00:40:55.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:43466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:55.857944 kernel: kauditd_printk_skb: 25 callbacks suppressed May 17 00:40:55.858010 kernel: audit: type=1130 audit(1747442455.855:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:43466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:55.932000 audit[5346]: USER_ACCT pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.934344 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 43466 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:55.936884 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:55.935000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.943372 kernel: audit: type=1101 audit(1747442455.932:438): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.943463 kernel: audit: type=1103 audit(1747442455.935:439): pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.943493 kernel: audit: type=1006 audit(1747442455.935:440): pid=5346 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 17 00:40:55.943131 systemd-logind[1293]: New session 11 of user core. May 17 00:40:55.943994 systemd[1]: Started session-11.scope. 
May 17 00:40:55.951582 kernel: audit: type=1300 audit(1747442455.935:440): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7fe87cb0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:55.935000 audit[5346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7fe87cb0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:55.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:40:55.953823 kernel: audit: type=1327 audit(1747442455.935:440): proctitle=737368643A20636F7265205B707269765D May 17 00:40:55.953875 kernel: audit: type=1105 audit(1747442455.948:441): pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.948000 audit[5346]: USER_START pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.950000 audit[5349]: CRED_ACQ pid=5349 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.963580 kernel: audit: type=1103 audit(1747442455.950:442): pid=5349 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:55.970293 env[1303]: time="2025-05-17T00:40:55.970227770Z" level=info msg="StartContainer for \"d27bc27d0f6a761f7e48def00b303ccb311a2db9bbeeb7a7a78e34c0c9e4d39f\" returns successfully" May 17 00:40:56.103407 env[1303]: time="2025-05-17T00:40:56.103334193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:56.203996 env[1303]: time="2025-05-17T00:40:56.203931322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:56.213310 sshd[5346]: pam_unix(sshd:session): session closed for user core May 17 00:40:56.213000 audit[5346]: USER_END pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:56.218824 systemd[1]: sshd@10-10.0.0.136:22-10.0.0.1:43466.service: Deactivated successfully. May 17 00:40:56.219918 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:40:56.220532 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit. May 17 00:40:56.221562 systemd-logind[1293]: Removed session 11. 
May 17 00:40:56.225138 kernel: audit: type=1106 audit(1747442456.213:443): pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:56.225673 env[1303]: time="2025-05-17T00:40:56.225631705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:56.213000 audit[5346]: CRED_DISP pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:56.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.136:22-10.0.0.1:43466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:56.235209 kernel: audit: type=1104 audit(1747442456.213:444): pid=5346 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:40:56.246309 env[1303]: time="2025-05-17T00:40:56.246262653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:56.246856 env[1303]: time="2025-05-17T00:40:56.246823129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:40:56.248034 env[1303]: time="2025-05-17T00:40:56.248010018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:40:56.248942 env[1303]: time="2025-05-17T00:40:56.248913097Z" level=info msg="CreateContainer within sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:40:56.478790 env[1303]: time="2025-05-17T00:40:56.478656172Z" level=info msg="CreateContainer within sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\"" May 17 00:40:56.479286 env[1303]: time="2025-05-17T00:40:56.479252748Z" level=info msg="StartContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\"" May 17 00:40:56.536125 env[1303]: time="2025-05-17T00:40:56.536037288Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 
00:40:56.559778 env[1303]: time="2025-05-17T00:40:56.559673284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:40:56.560501 kubelet[2118]: E0517 00:40:56.560272 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:40:56.560501 kubelet[2118]: E0517 00:40:56.560334 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:40:56.561124 env[1303]: time="2025-05-17T00:40:56.561091444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:40:56.561541 env[1303]: time="2025-05-17T00:40:56.561505842Z" level=info msg="StartContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" returns successfully" May 17 00:40:56.569470 kubelet[2118]: E0517 00:40:56.569408 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bfa66715bacc4f5c85e952a4173b2591,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:40:56.919051 env[1303]: time="2025-05-17T00:40:56.918986224Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: 
unexpected status: 403 Forbidden" host=ghcr.io May 17 00:40:56.934000 audit[5423]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=5423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:56.934000 audit[5423]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffde07c6230 a2=0 a3=7ffde07c621c items=0 ppid=2242 pid=5423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:56.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:56.939000 audit[5423]: NETFILTER_CFG table=nat:121 family=2 entries=30 op=nft_register_rule pid=5423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:56.939000 audit[5423]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffde07c6230 a2=0 a3=7ffde07c621c items=0 ppid=2242 pid=5423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:56.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:57.008829 env[1303]: time="2025-05-17T00:40:57.008756991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:40:57.009168 kubelet[2118]: E0517 00:40:57.009018 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:40:57.009168 kubelet[2118]: E0517 00:40:57.009075 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:40:57.009425 kubelet[2118]: E0517 00:40:57.009341 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fb95b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-6mszs_calico-system(4c9eee5c-cc38-4032-8b07-e8d97094a990): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:40:57.009738 env[1303]: time="2025-05-17T00:40:57.009691029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:40:57.010829 kubelet[2118]: E0517 00:40:57.010794 2118 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:40:57.132455 kubelet[2118]: I0517 00:40:57.132394 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d67ffffc-lwqn5" podStartSLOduration=37.674412989 podStartE2EDuration="48.1323723s" podCreationTimestamp="2025-05-17 00:40:09 +0000 UTC" firstStartedPulling="2025-05-17 00:40:45.789917704 +0000 UTC m=+55.388365516" lastFinishedPulling="2025-05-17 00:40:56.247877025 +0000 UTC m=+65.846324827" observedRunningTime="2025-05-17 00:40:56.776923332 +0000 UTC m=+66.375371124" watchObservedRunningTime="2025-05-17 00:40:57.1323723 +0000 UTC m=+66.730820102" May 17 00:40:57.589049 kubelet[2118]: I0517 00:40:57.589013 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:40:57.589860 kubelet[2118]: E0517 00:40:57.589827 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:40:57.773818 kubelet[2118]: I0517 00:40:57.773580 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ddd9bb8f8-zxssx" podStartSLOduration=36.12660862 podStartE2EDuration="45.773559827s" podCreationTimestamp="2025-05-17 00:40:12 +0000 UTC" firstStartedPulling="2025-05-17 00:40:45.56396879 +0000 UTC m=+55.162416592" lastFinishedPulling="2025-05-17 00:40:55.210919997 
+0000 UTC m=+64.809367799" observedRunningTime="2025-05-17 00:40:57.13266094 +0000 UTC m=+66.731108752" watchObservedRunningTime="2025-05-17 00:40:57.773559827 +0000 UTC m=+67.372007659" May 17 00:40:57.786000 audit[5433]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:57.786000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff8d893750 a2=0 a3=7fff8d89373c items=0 ppid=2242 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:57.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:40:57.791000 audit[5433]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:40:57.791000 audit[5433]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff8d893750 a2=0 a3=7fff8d89373c items=0 ppid=2242 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:57.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:00.234904 env[1303]: time="2025-05-17T00:41:00.233829126Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.245679 env[1303]: time="2025-05-17T00:41:00.245625402Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.255175 env[1303]: time="2025-05-17T00:41:00.255070628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.259143 env[1303]: time="2025-05-17T00:41:00.258683587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.259269 env[1303]: time="2025-05-17T00:41:00.259096562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:41:00.263119 env[1303]: time="2025-05-17T00:41:00.263066790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:41:00.265584 env[1303]: time="2025-05-17T00:41:00.265551233Z" level=info msg="CreateContainer within sandbox \"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:41:00.326150 env[1303]: time="2025-05-17T00:41:00.320594928Z" level=info msg="CreateContainer within sandbox \"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f8b4feb43628baed511722adc292d619c6231bc78b7ffc40404dff467628ae48\"" May 17 00:41:00.326150 env[1303]: time="2025-05-17T00:41:00.321272095Z" level=info msg="StartContainer for \"f8b4feb43628baed511722adc292d619c6231bc78b7ffc40404dff467628ae48\"" May 17 00:41:00.400718 systemd[1]: 
run-containerd-runc-k8s.io-f8b4feb43628baed511722adc292d619c6231bc78b7ffc40404dff467628ae48-runc.ex9WEJ.mount: Deactivated successfully. May 17 00:41:00.549679 env[1303]: time="2025-05-17T00:41:00.549362721Z" level=info msg="StartContainer for \"f8b4feb43628baed511722adc292d619c6231bc78b7ffc40404dff467628ae48\" returns successfully" May 17 00:41:00.751819 env[1303]: time="2025-05-17T00:41:00.749301497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.759797 env[1303]: time="2025-05-17T00:41:00.755857361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.763323 env[1303]: time="2025-05-17T00:41:00.762585211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.784771 env[1303]: time="2025-05-17T00:41:00.778141425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:00.784771 env[1303]: time="2025-05-17T00:41:00.779430777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:41:00.800275 env[1303]: time="2025-05-17T00:41:00.797743110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:41:00.800275 env[1303]: time="2025-05-17T00:41:00.799004999Z" level=info msg="CreateContainer within sandbox 
\"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:41:00.899470 env[1303]: time="2025-05-17T00:41:00.899296485Z" level=info msg="CreateContainer within sandbox \"ee14a95b940b90a00c19f3dc0c13d5f28340b0a8498337d8b6b9994707f69d67\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8044701bc9da5811531ed4dccd91809145446ca0c84c88fbad7afecc261d3fd3\"" May 17 00:41:00.905638 env[1303]: time="2025-05-17T00:41:00.904316689Z" level=info msg="StartContainer for \"8044701bc9da5811531ed4dccd91809145446ca0c84c88fbad7afecc261d3fd3\"" May 17 00:41:01.083469 env[1303]: time="2025-05-17T00:41:01.083415986Z" level=info msg="StartContainer for \"8044701bc9da5811531ed4dccd91809145446ca0c84c88fbad7afecc261d3fd3\" returns successfully" May 17 00:41:01.091065 env[1303]: time="2025-05-17T00:41:01.090971345Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:01.110505 env[1303]: time="2025-05-17T00:41:01.110428867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:01.111490 kubelet[2118]: E0517 00:41:01.111045 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:01.111490 kubelet[2118]: E0517 00:41:01.111119 
2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:01.111490 kubelet[2118]: E0517 00:41:01.111356 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:01.113654 kubelet[2118]: E0517 00:41:01.113340 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:01.115487 env[1303]: time="2025-05-17T00:41:01.115450080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:41:01.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.136:22-10.0.0.1:43468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:01.227745 systemd[1]: Started sshd@11-10.0.0.136:22-10.0.0.1:43468.service. May 17 00:41:01.232835 kernel: kauditd_printk_skb: 13 callbacks suppressed May 17 00:41:01.232913 kernel: audit: type=1130 audit(1747442461.226:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.136:22-10.0.0.1:43468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:01.355000 audit[5526]: USER_ACCT pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.358965 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 43468 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:01.373080 kernel: audit: type=1101 audit(1747442461.355:451): pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.373218 kernel: audit: type=1103 audit(1747442461.361:452): pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.361000 audit[5526]: CRED_ACQ pid=5526 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.363793 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:01.382728 kernel: audit: type=1006 audit(1747442461.361:453): 
pid=5526 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 May 17 00:41:01.381138 systemd[1]: Started session-12.scope. May 17 00:41:01.381827 systemd-logind[1293]: New session 12 of user core. May 17 00:41:01.361000 audit[5526]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd875c11e0 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.401931 kernel: audit: type=1300 audit(1747442461.361:453): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd875c11e0 a2=3 a3=0 items=0 ppid=1 pid=5526 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.402005 kernel: audit: type=1327 audit(1747442461.361:453): proctitle=737368643A20636F7265205B707269765D May 17 00:41:01.361000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:01.422376 kernel: audit: type=1105 audit(1747442461.407:454): pid=5526 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.407000 audit[5526]: USER_START pid=5526 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.415000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.430205 kernel: audit: type=1103 audit(1747442461.415:455): pid=5529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:01.644263 kubelet[2118]: E0517 00:41:01.632347 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:01.760792 kubelet[2118]: I0517 00:41:01.750400 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dfbcfbd6f-78x8j" podStartSLOduration=39.145247284 podStartE2EDuration="51.750376013s" podCreationTimestamp="2025-05-17 00:40:10 +0000 UTC" firstStartedPulling="2025-05-17 00:40:48.189952551 +0000 UTC m=+57.788400353" lastFinishedPulling="2025-05-17 00:41:00.79508128 +0000 UTC m=+70.393529082" observedRunningTime="2025-05-17 00:41:01.695745601 +0000 UTC m=+71.294193433" watchObservedRunningTime="2025-05-17 00:41:01.750376013 +0000 UTC m=+71.348823805" May 17 00:41:01.768000 audit[5540]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:01.768000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffde5f314b0 a2=0 a3=7ffde5f3149c items=0 ppid=2242 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.801144 kernel: audit: type=1325 audit(1747442461.768:456): table=filter:124 family=2 entries=12 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:01.801314 kernel: audit: type=1300 audit(1747442461.768:456): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffde5f314b0 a2=0 a3=7ffde5f3149c items=0 ppid=2242 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:01.812000 audit[5540]: NETFILTER_CFG table=nat:125 family=2 entries=30 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:01.812000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffde5f314b0 a2=0 a3=7ffde5f3149c items=0 ppid=2242 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:01.837000 audit[5542]: NETFILTER_CFG table=filter:126 family=2 entries=12 op=nft_register_rule pid=5542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:01.837000 audit[5542]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff2783b1a0 a2=0 a3=7fff2783b18c items=0 ppid=2242 pid=5542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:01.853000 audit[5542]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5542 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:01.853000 audit[5542]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff2783b1a0 a2=0 a3=7fff2783b18c items=0 ppid=2242 pid=5542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:01.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:02.061929 sshd[5526]: pam_unix(sshd:session): session closed for user core May 17 00:41:02.068413 systemd[1]: Started sshd@12-10.0.0.136:22-10.0.0.1:43476.service. May 17 00:41:02.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.136:22-10.0.0.1:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:02.068000 audit[5526]: USER_END pid=5526 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.068000 audit[5526]: CRED_DISP pid=5526 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.077020 systemd-logind[1293]: Session 12 logged out. 
Waiting for processes to exit. May 17 00:41:02.079905 systemd[1]: sshd@11-10.0.0.136:22-10.0.0.1:43468.service: Deactivated successfully. May 17 00:41:02.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.136:22-10.0.0.1:43468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:02.085261 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:41:02.089016 systemd-logind[1293]: Removed session 12. May 17 00:41:02.165000 audit[5543]: USER_ACCT pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.166627 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 43476 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:02.168000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.168000 audit[5543]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc84ec1960 a2=3 a3=0 items=0 ppid=1 pid=5543 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:02.168000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:02.170092 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:02.189961 systemd[1]: Started session-13.scope. May 17 00:41:02.190380 systemd-logind[1293]: New session 13 of user core. 
May 17 00:41:02.249000 audit[5543]: USER_START pid=5543 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.255000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.136:22-10.0.0.1:43478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:02.838596 systemd[1]: Started sshd@13-10.0.0.136:22-10.0.0.1:43478.service. May 17 00:41:02.937688 sshd[5543]: pam_unix(sshd:session): session closed for user core May 17 00:41:02.942000 audit[5543]: USER_END pid=5543 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.942000 audit[5543]: CRED_DISP pid=5543 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:02.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.136:22-10.0.0.1:43476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:02.955478 systemd[1]: sshd@12-10.0.0.136:22-10.0.0.1:43476.service: Deactivated successfully. 
May 17 00:41:02.956655 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:41:02.968210 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. May 17 00:41:02.969861 systemd-logind[1293]: Removed session 13. May 17 00:41:03.043000 audit[5555]: USER_ACCT pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.045448 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 43478 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:03.045000 audit[5555]: CRED_ACQ pid=5555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.047305 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:03.045000 audit[5555]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf2facd50 a2=3 a3=0 items=0 ppid=1 pid=5555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:03.045000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:03.075299 systemd-logind[1293]: New session 14 of user core. May 17 00:41:03.088455 systemd[1]: Started session-14.scope. 
May 17 00:41:03.127000 audit[5555]: USER_START pid=5555 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.130000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.482939 sshd[5555]: pam_unix(sshd:session): session closed for user core May 17 00:41:03.485000 audit[5555]: USER_END pid=5555 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.485000 audit[5555]: CRED_DISP pid=5555 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:03.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.136:22-10.0.0.1:43478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:03.489340 systemd[1]: sshd@13-10.0.0.136:22-10.0.0.1:43478.service: Deactivated successfully. May 17 00:41:03.490393 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:41:03.500708 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. May 17 00:41:03.504521 systemd-logind[1293]: Removed session 14. 
May 17 00:41:04.156000 audit[5573]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=5573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:04.156000 audit[5573]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd20c577a0 a2=0 a3=7ffd20c5778c items=0 ppid=2242 pid=5573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:04.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:04.175000 audit[5573]: NETFILTER_CFG table=nat:129 family=2 entries=34 op=nft_register_chain pid=5573 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:04.175000 audit[5573]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd20c577a0 a2=0 a3=7ffd20c5778c items=0 ppid=2242 pid=5573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:04.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:04.218017 env[1303]: time="2025-05-17T00:41:04.217068393Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:04.243421 kubelet[2118]: I0517 00:41:04.238002 2118 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:41:04.255602 env[1303]: time="2025-05-17T00:41:04.253453514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:04.256065 env[1303]: time="2025-05-17T00:41:04.256030848Z" level=info msg="StopContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" with timeout 30 (s)"
May 17 00:41:04.256596 env[1303]: time="2025-05-17T00:41:04.256561917Z" level=info msg="Stop container \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" with signal terminated"
May 17 00:41:04.291529 env[1303]: time="2025-05-17T00:41:04.287093222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:04.306331 env[1303]: time="2025-05-17T00:41:04.303821331Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:04.306331 env[1303]: time="2025-05-17T00:41:04.304157028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 17 00:41:04.309216 env[1303]: time="2025-05-17T00:41:04.309171532Z" level=info msg="CreateContainer within sandbox \"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 17 00:41:04.312000 audit[5576]: NETFILTER_CFG table=filter:130 family=2 entries=12 op=nft_register_rule pid=5576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:04.312000 audit[5576]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffc90e2820 a2=0 a3=7fffc90e280c items=0 ppid=2242 pid=5576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:04.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:04.343000 audit[5576]: NETFILTER_CFG table=nat:131 family=2 entries=36 op=nft_register_rule pid=5576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:04.343000 audit[5576]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fffc90e2820 a2=0 a3=7fffc90e280c items=0 ppid=2242 pid=5576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:04.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:04.387997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108431295.mount: Deactivated successfully.
May 17 00:41:04.428735 env[1303]: time="2025-05-17T00:41:04.427741466Z" level=info msg="CreateContainer within sandbox \"010b1f050267c2fbc68a176709e97b66f202d8a7f1e938cce8e12e96e88fe445\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0bb2560f282f8c87b9d5ae08ff7c97e10e1f9566d1a69d355b6583010602f731\""
May 17 00:41:04.428735 env[1303]: time="2025-05-17T00:41:04.428615827Z" level=info msg="StartContainer for \"0bb2560f282f8c87b9d5ae08ff7c97e10e1f9566d1a69d355b6583010602f731\""
May 17 00:41:04.473231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885-rootfs.mount: Deactivated successfully.
May 17 00:41:04.482896 systemd[1]: run-containerd-runc-k8s.io-0bb2560f282f8c87b9d5ae08ff7c97e10e1f9566d1a69d355b6583010602f731-runc.Zvpr9f.mount: Deactivated successfully.
May 17 00:41:04.484509 env[1303]: time="2025-05-17T00:41:04.483254033Z" level=info msg="shim disconnected" id=2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885
May 17 00:41:04.484509 env[1303]: time="2025-05-17T00:41:04.483311423Z" level=warning msg="cleaning up after shim disconnected" id=2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885 namespace=k8s.io
May 17 00:41:04.484509 env[1303]: time="2025-05-17T00:41:04.483323004Z" level=info msg="cleaning up dead shim"
May 17 00:41:04.514511 env[1303]: time="2025-05-17T00:41:04.514457114Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5610 runtime=io.containerd.runc.v2\n"
May 17 00:41:04.665010 env[1303]: time="2025-05-17T00:41:04.664864913Z" level=info msg="StartContainer for \"0bb2560f282f8c87b9d5ae08ff7c97e10e1f9566d1a69d355b6583010602f731\" returns successfully"
May 17 00:41:04.685309 env[1303]: time="2025-05-17T00:41:04.684789723Z" level=info msg="StopContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" returns successfully"
May 17 00:41:04.685728 env[1303]: time="2025-05-17T00:41:04.685666499Z" level=info msg="StopPodSandbox for \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\""
May 17 00:41:04.685783 env[1303]: time="2025-05-17T00:41:04.685748223Z" level=info msg="Container to stop \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:41:04.765613 env[1303]: time="2025-05-17T00:41:04.763591681Z" level=info msg="shim disconnected" id=77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863
May 17 00:41:04.765613 env[1303]: time="2025-05-17T00:41:04.763643099Z" level=warning msg="cleaning up after shim disconnected" id=77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863 namespace=k8s.io
May 17 00:41:04.765613 env[1303]: time="2025-05-17T00:41:04.764970240Z" level=info msg="cleaning up dead shim"
May 17 00:41:04.787197 env[1303]: time="2025-05-17T00:41:04.787051716Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5662 runtime=io.containerd.runc.v2\n"
May 17 00:41:05.030301 systemd-networkd[1079]: cali126211cea7a: Link DOWN
May 17 00:41:05.030307 systemd-networkd[1079]: cali126211cea7a: Lost carrier
May 17 00:41:05.077000 audit[5698]: NETFILTER_CFG table=filter:132 family=2 entries=59 op=nft_register_rule pid=5698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
May 17 00:41:05.077000 audit[5698]: SYSCALL arch=c000003e syscall=46 success=yes exit=9048 a0=3 a1=7fff5587c4d0 a2=0 a3=7fff5587c4bc items=0 ppid=3837 pid=5698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:05.077000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
May 17 00:41:05.081000 audit[5698]: NETFILTER_CFG table=filter:133 family=2 entries=4 op=nft_unregister_chain pid=5698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
May 17 00:41:05.081000 audit[5698]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7fff5587c4d0 a2=0 a3=564da76f5000 items=0 ppid=3837 pid=5698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:05.081000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.016 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.022 [INFO][5684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" iface="eth0" netns="/var/run/netns/cni-50d169e1-b7e8-a64c-c6b5-7c768d326a33"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.023 [INFO][5684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" iface="eth0" netns="/var/run/netns/cni-50d169e1-b7e8-a64c-c6b5-7c768d326a33"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.089 [INFO][5684] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" after=66.624642ms iface="eth0" netns="/var/run/netns/cni-50d169e1-b7e8-a64c-c6b5-7c768d326a33"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.090 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.090 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.153 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.153 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.153 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.266 [INFO][5700] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.266 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0"
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.273 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:41:05.295912 env[1303]: 2025-05-17 00:41:05.282 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863"
May 17 00:41:05.305834 env[1303]: time="2025-05-17T00:41:05.295918867Z" level=info msg="TearDown network for sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" successfully"
May 17 00:41:05.305834 env[1303]: time="2025-05-17T00:41:05.295959504Z" level=info msg="StopPodSandbox for \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" returns successfully"
May 17 00:41:05.387184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863-rootfs.mount: Deactivated successfully.
May 17 00:41:05.388169 systemd[1]: run-netns-cni\x2d50d169e1\x2db7e8\x2da64c\x2dc6b5\x2d7c768d326a33.mount: Deactivated successfully.
May 17 00:41:05.388844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863-shm.mount: Deactivated successfully.
May 17 00:41:05.389000 audit[5708]: NETFILTER_CFG table=filter:134 family=2 entries=12 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:05.389000 audit[5708]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff749bd340 a2=0 a3=7fff749bd32c items=0 ppid=2242 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:05.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:05.403000 audit[5708]: NETFILTER_CFG table=nat:135 family=2 entries=36 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:41:05.403000 audit[5708]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff749bd340 a2=0 a3=7fff749bd32c items=0 ppid=2242 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:05.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:41:05.482769 kubelet[2118]: I0517 00:41:05.482668 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klt5v\" (UniqueName: \"kubernetes.io/projected/11364172-e58d-4414-85d0-e2ab3fb5d624-kube-api-access-klt5v\") pod \"11364172-e58d-4414-85d0-e2ab3fb5d624\" (UID: \"11364172-e58d-4414-85d0-e2ab3fb5d624\") "
May 17 00:41:05.482769 kubelet[2118]: I0517 00:41:05.482770 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11364172-e58d-4414-85d0-e2ab3fb5d624-calico-apiserver-certs\") pod \"11364172-e58d-4414-85d0-e2ab3fb5d624\" (UID: \"11364172-e58d-4414-85d0-e2ab3fb5d624\") "
May 17 00:41:05.499831 systemd[1]: var-lib-kubelet-pods-11364172\x2de58d\x2d4414\x2d85d0\x2de2ab3fb5d624-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 17 00:41:05.510192 systemd[1]: var-lib-kubelet-pods-11364172\x2de58d\x2d4414\x2d85d0\x2de2ab3fb5d624-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklt5v.mount: Deactivated successfully.
May 17 00:41:05.510593 kubelet[2118]: I0517 00:41:05.510522 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11364172-e58d-4414-85d0-e2ab3fb5d624-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "11364172-e58d-4414-85d0-e2ab3fb5d624" (UID: "11364172-e58d-4414-85d0-e2ab3fb5d624"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:41:05.513227 kubelet[2118]: I0517 00:41:05.513190 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11364172-e58d-4414-85d0-e2ab3fb5d624-kube-api-access-klt5v" (OuterVolumeSpecName: "kube-api-access-klt5v") pod "11364172-e58d-4414-85d0-e2ab3fb5d624" (UID: "11364172-e58d-4414-85d0-e2ab3fb5d624"). InnerVolumeSpecName "kube-api-access-klt5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:41:05.583100 kubelet[2118]: I0517 00:41:05.583046 2118 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klt5v\" (UniqueName: \"kubernetes.io/projected/11364172-e58d-4414-85d0-e2ab3fb5d624-kube-api-access-klt5v\") on node \"localhost\" DevicePath \"\""
May 17 00:41:05.583366 kubelet[2118]: I0517 00:41:05.583348 2118 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11364172-e58d-4414-85d0-e2ab3fb5d624-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 17 00:41:05.732257 kubelet[2118]: I0517 00:41:05.726395 2118 scope.go:117] "RemoveContainer" containerID="2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885"
May 17 00:41:05.739709 env[1303]: time="2025-05-17T00:41:05.735524044Z" level=info msg="RemoveContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\""
May 17 00:41:05.753491 kubelet[2118]: I0517 00:41:05.753459 2118 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 17 00:41:05.753879 kubelet[2118]: I0517 00:41:05.753868 2118 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 17 00:41:05.763204 env[1303]: time="2025-05-17T00:41:05.760361503Z" level=info msg="RemoveContainer for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" returns successfully"
May 17 00:41:05.766386 kubelet[2118]: I0517 00:41:05.766340 2118 scope.go:117] "RemoveContainer" containerID="2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885"
May 17 00:41:05.774162 env[1303]: time="2025-05-17T00:41:05.773984599Z" level=error msg="ContainerStatus for \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\": not found"
May 17 00:41:05.774738 kubelet[2118]: E0517 00:41:05.774708 2118 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\": not found" containerID="2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885"
May 17 00:41:05.774889 kubelet[2118]: I0517 00:41:05.774857 2118 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885"} err="failed to get container status \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d29de12ced157fe3ca9bff7f1eec918c76bedff2065f33e43a6909361cec885\": not found"
May 17 00:41:05.777368 kubelet[2118]: I0517 00:41:05.777284 2118 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t72dz" podStartSLOduration=37.382029493 podStartE2EDuration="53.777256723s" podCreationTimestamp="2025-05-17 00:40:12 +0000 UTC" firstStartedPulling="2025-05-17 00:40:47.912198757 +0000 UTC m=+57.510646569" lastFinishedPulling="2025-05-17 00:41:04.307425997 +0000 UTC m=+73.905873799" observedRunningTime="2025-05-17 00:41:05.764956959 +0000 UTC m=+75.363404761" watchObservedRunningTime="2025-05-17 00:41:05.777256723 +0000 UTC m=+75.375704525"
May 17 00:41:06.544242 kubelet[2118]: I0517 00:41:06.544192 2118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11364172-e58d-4414-85d0-e2ab3fb5d624" path="/var/lib/kubelet/pods/11364172-e58d-4414-85d0-e2ab3fb5d624/volumes"
May 17 00:41:06.779735 systemd[1]: run-containerd-runc-k8s.io-91d8b53621d638b132841871a76978e3b01079f5e8ec8863fd5caa72f6bc7b6a-runc.RDeEHG.mount: Deactivated successfully.
May 17 00:41:08.503675 systemd[1]: Started sshd@14-10.0.0.136:22-10.0.0.1:57828.service.
May 17 00:41:08.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:57828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:08.506242 kernel: kauditd_printk_skb: 59 callbacks suppressed
May 17 00:41:08.506311 kernel: audit: type=1130 audit(1747442468.502:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:57828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:08.601724 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 57828 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0
May 17 00:41:08.597000 audit[5738]: USER_ACCT pid=5738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.605278 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:08.597000 audit[5738]: CRED_ACQ pid=5738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.633393 kernel: audit: type=1101 audit(1747442468.597:490): pid=5738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.633545 kernel: audit: type=1103 audit(1747442468.597:491): pid=5738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.633569 kernel: audit: type=1006 audit(1747442468.597:492): pid=5738 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
May 17 00:41:08.649234 kernel: audit: type=1300 audit(1747442468.597:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e9d8560 a2=3 a3=0 items=0 ppid=1 pid=5738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:08.649353 kernel: audit: type=1327 audit(1747442468.597:492): proctitle=737368643A20636F7265205B707269765D
May 17 00:41:08.597000 audit[5738]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8e9d8560 a2=3 a3=0 items=0 ppid=1 pid=5738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:08.597000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:41:08.646438 systemd[1]: Started session-15.scope.
May 17 00:41:08.647652 systemd-logind[1293]: New session 15 of user core.
May 17 00:41:08.688000 audit[5738]: USER_START pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.690000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.705063 kernel: audit: type=1105 audit(1747442468.688:493): pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:08.705228 kernel: audit: type=1103 audit(1747442468.690:494): pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:09.225444 sshd[5738]: pam_unix(sshd:session): session closed for user core
May 17 00:41:09.230000 audit[5738]: USER_END pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:09.234059 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit.
May 17 00:41:09.230000 audit[5738]: CRED_DISP pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:09.246219 kernel: audit: type=1106 audit(1747442469.230:495): pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:09.246302 kernel: audit: type=1104 audit(1747442469.230:496): pid=5738 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:09.246077 systemd[1]: sshd@14-10.0.0.136:22-10.0.0.1:57828.service: Deactivated successfully.
May 17 00:41:09.247074 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:41:09.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.136:22-10.0.0.1:57828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:09.250083 systemd-logind[1293]: Removed session 15.
May 17 00:41:09.488131 env[1303]: time="2025-05-17T00:41:09.487761555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:41:09.761354 env[1303]: time="2025-05-17T00:41:09.760524873Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:41:09.764146 env[1303]: time="2025-05-17T00:41:09.764055050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:41:09.764391 kubelet[2118]: E0517 00:41:09.764340 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:41:09.764759 kubelet[2118]: E0517 00:41:09.764398 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:41:09.764759 kubelet[2118]: E0517 00:41:09.764531 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fb95b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-6mszs_calico-system(4c9eee5c-cc38-4032-8b07-e8d97094a990): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:41:09.776834 kubelet[2118]: E0517 00:41:09.773093 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990"
May 17 00:41:10.496709 kubelet[2118]: E0517 00:41:10.491205 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:11.486191 kubelet[2118]: E0517 00:41:11.485730 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:12.491140 kubelet[2118]: E0517 00:41:12.491067 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:14.225508 systemd[1]: Started sshd@15-10.0.0.136:22-10.0.0.1:59156.service.
May 17 00:41:14.234159 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:41:14.234314 kernel: audit: type=1130 audit(1747442474.225:498): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:59156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:14.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:59156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:14.399000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.408122 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0
May 17 00:41:14.417000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.420571 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:14.433205 kernel: audit: type=1101 audit(1747442474.399:499): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.433363 kernel: audit: type=1103 audit(1747442474.417:500): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.455519 kernel: audit: type=1006 audit(1747442474.417:501): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
May 17 00:41:14.455648 kernel: audit: type=1300 audit(1747442474.417:501): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc48f69960 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:14.417000 audit[5759]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc48f69960 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:14.449905 systemd[1]: Started session-16.scope.
May 17 00:41:14.451092 systemd-logind[1293]: New session 16 of user core.
May 17 00:41:14.417000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:41:14.489814 kernel: audit: type=1327 audit(1747442474.417:501): proctitle=737368643A20636F7265205B707269765D
May 17 00:41:14.489992 kernel: audit: type=1105 audit(1747442474.474:502): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.474000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.483000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.502632 kernel: audit: type=1103 audit(1747442474.483:503): pid=5762 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 17 00:41:14.925840 sshd[5759]: pam_unix(sshd:session): session closed for user core
May 17
00:41:14.927000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:14.931547 systemd[1]: sshd@15-10.0.0.136:22-10.0.0.1:59156.service: Deactivated successfully. May 17 00:41:14.933035 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:41:14.933721 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. May 17 00:41:14.934917 systemd-logind[1293]: Removed session 16. May 17 00:41:14.927000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:14.947100 kernel: audit: type=1106 audit(1747442474.927:504): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:14.947285 kernel: audit: type=1104 audit(1747442474.927:505): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:14.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.136:22-10.0.0.1:59156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:16.490998 env[1303]: time="2025-05-17T00:41:16.488665507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:41:16.842312 env[1303]: time="2025-05-17T00:41:16.841927552Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:16.867742 env[1303]: time="2025-05-17T00:41:16.863686165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:16.867937 kubelet[2118]: E0517 00:41:16.863998 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:41:16.867937 kubelet[2118]: E0517 00:41:16.864064 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:41:16.867937 kubelet[2118]: E0517 00:41:16.864206 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bfa66715bacc4f5c85e952a4173b2591,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:16.877989 env[1303]: time="2025-05-17T00:41:16.877562886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:41:17.114558 
env[1303]: time="2025-05-17T00:41:17.114024725Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:17.140099 env[1303]: time="2025-05-17T00:41:17.140006232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:17.140612 kubelet[2118]: E0517 00:41:17.140350 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:17.140612 kubelet[2118]: E0517 00:41:17.140424 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:17.140612 kubelet[2118]: E0517 00:41:17.140554 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:17.150905 kubelet[2118]: E0517 00:41:17.142217 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:19.958650 systemd[1]: Started sshd@16-10.0.0.136:22-10.0.0.1:59164.service. May 17 00:41:19.977229 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:41:19.977362 kernel: audit: type=1130 audit(1747442479.966:507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:59164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:19.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:59164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:20.047000 audit[5775]: USER_ACCT pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.051832 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 59164 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:20.054882 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:20.053000 audit[5775]: CRED_ACQ pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.063608 systemd-logind[1293]: New session 17 of user core. May 17 00:41:20.069920 kernel: audit: type=1101 audit(1747442480.047:508): pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.070138 kernel: audit: type=1103 audit(1747442480.053:509): pid=5775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.064749 systemd[1]: Started session-17.scope. 
May 17 00:41:20.053000 audit[5775]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd467996b0 a2=3 a3=0 items=0 ppid=1 pid=5775 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:20.084524 kernel: audit: type=1006 audit(1747442480.053:510): pid=5775 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 17 00:41:20.084632 kernel: audit: type=1300 audit(1747442480.053:510): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd467996b0 a2=3 a3=0 items=0 ppid=1 pid=5775 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:20.053000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:20.088582 kernel: audit: type=1327 audit(1747442480.053:510): proctitle=737368643A20636F7265205B707269765D May 17 00:41:20.088661 kernel: audit: type=1105 audit(1747442480.080:511): pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.080000 audit[5775]: USER_START pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.098332 kernel: audit: type=1103 audit(1747442480.081:512): pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' May 17 00:41:20.081000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.283466 sshd[5775]: pam_unix(sshd:session): session closed for user core May 17 00:41:20.293099 kernel: audit: type=1106 audit(1747442480.284:513): pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.293212 kernel: audit: type=1104 audit(1747442480.284:514): pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.284000 audit[5775]: USER_END pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.284000 audit[5775]: CRED_DISP pid=5775 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:20.290342 systemd[1]: sshd@16-10.0.0.136:22-10.0.0.1:59164.service: Deactivated successfully. May 17 00:41:20.295592 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:41:20.299015 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit. May 17 00:41:20.300868 systemd-logind[1293]: Removed session 17. 
May 17 00:41:20.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.136:22-10.0.0.1:59164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:23.487220 kubelet[2118]: E0517 00:41:23.487183 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:23.488923 kubelet[2118]: E0517 00:41:23.488874 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:41:25.287414 systemd[1]: Started sshd@17-10.0.0.136:22-10.0.0.1:49538.service. May 17 00:41:25.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.136:22-10.0.0.1:49538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:25.302015 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:41:25.302144 kernel: audit: type=1130 audit(1747442485.288:516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.136:22-10.0.0.1:49538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:25.361000 audit[5791]: USER_ACCT pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.362333 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 49538 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:25.368847 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:25.371136 kernel: audit: type=1101 audit(1747442485.361:517): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.371204 kernel: audit: type=1103 audit(1747442485.368:518): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.368000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.390166 kernel: audit: type=1006 audit(1747442485.368:519): pid=5791 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 17 00:41:25.390312 kernel: audit: type=1300 audit(1747442485.368:519): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd620ac20 a2=3 a3=0 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:41:25.390345 kernel: audit: type=1327 audit(1747442485.368:519): proctitle=737368643A20636F7265205B707269765D May 17 00:41:25.368000 audit[5791]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd620ac20 a2=3 a3=0 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:25.368000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:25.382299 systemd[1]: Started session-18.scope. May 17 00:41:25.384567 systemd-logind[1293]: New session 18 of user core. May 17 00:41:25.390000 audit[5791]: USER_START pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.394000 audit[5794]: CRED_ACQ pid=5794 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.412534 kernel: audit: type=1105 audit(1747442485.390:520): pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.412757 kernel: audit: type=1103 audit(1747442485.394:521): pid=5794 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.592429 sshd[5791]: pam_unix(sshd:session): session closed for user core May 17 
00:41:25.593000 audit[5791]: USER_END pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.594000 audit[5791]: CRED_DISP pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.601709 systemd[1]: sshd@17-10.0.0.136:22-10.0.0.1:49538.service: Deactivated successfully. May 17 00:41:25.602739 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:41:25.603986 kernel: audit: type=1106 audit(1747442485.593:522): pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.604166 kernel: audit: type=1104 audit(1747442485.594:523): pid=5791 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:25.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.136:22-10.0.0.1:49538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:25.609472 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. May 17 00:41:25.610581 systemd-logind[1293]: Removed session 18. 
May 17 00:41:30.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.136:22-10.0.0.1:49546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.593175 systemd[1]: Started sshd@18-10.0.0.136:22-10.0.0.1:49546.service. May 17 00:41:30.595323 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:41:30.595378 kernel: audit: type=1130 audit(1747442490.590:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.136:22-10.0.0.1:49546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.678000 audit[5814]: USER_ACCT pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.679626 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 49546 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:30.684912 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:30.697653 kernel: audit: type=1101 audit(1747442490.678:526): pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.697832 kernel: audit: type=1103 audit(1747442490.684:527): pid=5814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.684000 audit[5814]: CRED_ACQ pid=5814 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.708464 kernel: audit: type=1006 audit(1747442490.684:528): pid=5814 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 May 17 00:41:30.715300 kernel: audit: type=1300 audit(1747442490.684:528): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc79d04490 a2=3 a3=0 items=0 ppid=1 pid=5814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:30.684000 audit[5814]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc79d04490 a2=3 a3=0 items=0 ppid=1 pid=5814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:30.684000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:30.722837 systemd[1]: Started session-19.scope. May 17 00:41:30.723501 systemd-logind[1293]: New session 19 of user core. 
May 17 00:41:30.727483 kernel: audit: type=1327 audit(1747442490.684:528): proctitle=737368643A20636F7265205B707269765D May 17 00:41:30.753000 audit[5814]: USER_START pid=5814 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.758000 audit[5817]: CRED_ACQ pid=5817 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.768449 kernel: audit: type=1105 audit(1747442490.753:529): pid=5814 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.768606 kernel: audit: type=1103 audit(1747442490.758:530): pid=5817 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.136:22-10.0.0.1:49548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.967293 systemd[1]: Started sshd@19-10.0.0.136:22-10.0.0.1:49548.service. May 17 00:41:30.972143 kernel: audit: type=1130 audit(1747442490.967:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.136:22-10.0.0.1:49548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:30.964063 sshd[5814]: pam_unix(sshd:session): session closed for user core May 17 00:41:30.988322 kernel: audit: type=1106 audit(1747442490.977:532): pid=5814 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.977000 audit[5814]: USER_END pid=5814 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.991000 audit[5814]: CRED_DISP pid=5814 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:30.994121 systemd[1]: sshd@18-10.0.0.136:22-10.0.0.1:49546.service: Deactivated successfully. May 17 00:41:30.996701 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:41:31.000586 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. May 17 00:41:30.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.136:22-10.0.0.1:49546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:31.006623 systemd-logind[1293]: Removed session 19. 
May 17 00:41:31.086000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:31.087263 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 49548 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:31.088000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:31.089000 audit[5828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1b7d0c10 a2=3 a3=0 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:31.089000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:31.089838 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:31.126729 systemd-logind[1293]: New session 20 of user core. May 17 00:41:31.130537 systemd[1]: Started session-20.scope. 
May 17 00:41:31.157000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:31.159000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:31.498815 kubelet[2118]: E0517 00:41:31.496825 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:32.038135 sshd[5828]: pam_unix(sshd:session): session closed for user core May 17 00:41:32.039000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:32.039000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:32.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.136:22-10.0.0.1:49564 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.040148 systemd[1]: Started sshd@20-10.0.0.136:22-10.0.0.1:49564.service. May 17 00:41:32.048331 systemd[1]: sshd@19-10.0.0.136:22-10.0.0.1:49548.service: Deactivated successfully. May 17 00:41:32.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.136:22-10.0.0.1:49548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:32.050006 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:41:32.050684 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit. May 17 00:41:32.051765 systemd-logind[1293]: Removed session 20. May 17 00:41:32.097000 audit[5860]: USER_ACCT pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:32.097662 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 49564 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:32.099000 audit[5860]: CRED_ACQ pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:32.099000 audit[5860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff797c47c0 a2=3 a3=0 items=0 ppid=1 pid=5860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:32.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:32.099891 sshd[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 
00:41:32.109414 systemd-logind[1293]: New session 21 of user core. May 17 00:41:32.109885 systemd[1]: Started session-21.scope. May 17 00:41:32.128000 audit[5860]: USER_START pid=5860 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:32.130000 audit[5865]: CRED_ACQ pid=5865 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:33.485780 kubelet[2118]: E0517 00:41:33.485723 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:33.775000 audit[5876]: NETFILTER_CFG table=filter:136 family=2 entries=12 op=nft_register_rule pid=5876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:33.775000 audit[5876]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc32a1fc00 a2=0 a3=7ffc32a1fbec items=0 ppid=2242 pid=5876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:33.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:33.792000 audit[5876]: NETFILTER_CFG table=nat:137 family=2 entries=36 op=nft_register_rule pid=5876 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:33.792000 audit[5876]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc32a1fc00 a2=0 a3=7ffc32a1fbec items=0 ppid=2242 pid=5876 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:33.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:35.295000 audit[5878]: NETFILTER_CFG table=filter:138 family=2 entries=24 op=nft_register_rule pid=5878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:35.295000 audit[5878]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffc89eb3bc0 a2=0 a3=7ffc89eb3bac items=0 ppid=2242 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:35.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:35.313000 audit[5878]: NETFILTER_CFG table=nat:139 family=2 entries=22 op=nft_register_rule pid=5878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:35.313000 audit[5878]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc89eb3bc0 a2=0 a3=0 items=0 ppid=2242 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:35.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:35.333000 audit[5880]: NETFILTER_CFG table=filter:140 family=2 entries=36 op=nft_register_rule pid=5880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:35.333000 audit[5880]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffecd3377c0 
a2=0 a3=7ffecd3377ac items=0 ppid=2242 pid=5880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:35.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:35.351400 sshd[5860]: pam_unix(sshd:session): session closed for user core May 17 00:41:35.350000 audit[5880]: NETFILTER_CFG table=nat:141 family=2 entries=22 op=nft_register_rule pid=5880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:35.350000 audit[5880]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffecd3377c0 a2=0 a3=0 items=0 ppid=2242 pid=5880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:35.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:35.357080 systemd[1]: Started sshd@21-10.0.0.136:22-10.0.0.1:36568.service. May 17 00:41:35.360000 audit[5860]: USER_END pid=5860 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.136:22-10.0.0.1:36568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:35.360000 audit[5860]: CRED_DISP pid=5860 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.136:22-10.0.0.1:49564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:35.364198 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit. May 17 00:41:35.364362 systemd[1]: sshd@20-10.0.0.136:22-10.0.0.1:49564.service: Deactivated successfully. May 17 00:41:35.366653 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:41:35.372322 systemd-logind[1293]: Removed session 21. May 17 00:41:35.451000 audit[5881]: USER_ACCT pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.458000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.459000 audit[5881]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe878f7e20 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:35.459000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:35.460898 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:35.468657 sshd[5881]: Accepted 
publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:35.473875 systemd[1]: Started session-22.scope. May 17 00:41:35.475170 systemd-logind[1293]: New session 22 of user core. May 17 00:41:35.525000 audit[5881]: USER_START pid=5881 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.527000 audit[5886]: CRED_ACQ pid=5886 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:35.707660 env[1303]: time="2025-05-17T00:41:35.706803614Z" level=info msg="StopContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" with timeout 30 (s)" May 17 00:41:35.707660 env[1303]: time="2025-05-17T00:41:35.707259727Z" level=info msg="Stop container \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" with signal terminated" May 17 00:41:35.857992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01-rootfs.mount: Deactivated successfully. 
May 17 00:41:35.867400 env[1303]: time="2025-05-17T00:41:35.867344322Z" level=info msg="shim disconnected" id=57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01 May 17 00:41:35.867613 env[1303]: time="2025-05-17T00:41:35.867586751Z" level=warning msg="cleaning up after shim disconnected" id=57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01 namespace=k8s.io May 17 00:41:35.867739 env[1303]: time="2025-05-17T00:41:35.867717248Z" level=info msg="cleaning up dead shim" May 17 00:41:35.902402 env[1303]: time="2025-05-17T00:41:35.902332876Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5910 runtime=io.containerd.runc.v2\n" May 17 00:41:36.390356 env[1303]: time="2025-05-17T00:41:36.390291222Z" level=info msg="StopContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" returns successfully" May 17 00:41:36.392346 env[1303]: time="2025-05-17T00:41:36.392298996Z" level=info msg="StopPodSandbox for \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\"" May 17 00:41:36.392577 env[1303]: time="2025-05-17T00:41:36.392548668Z" level=info msg="Container to stop \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.402560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d-shm.mount: Deactivated successfully. 
May 17 00:41:36.473180 kernel: kauditd_printk_skb: 49 callbacks suppressed May 17 00:41:36.473363 kernel: audit: type=1325 audit(1747442496.466:564): table=filter:142 family=2 entries=36 op=nft_register_rule pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:36.466000 audit[5930]: NETFILTER_CFG table=filter:142 family=2 entries=36 op=nft_register_rule pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:36.466000 audit[5930]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffced36ec60 a2=0 a3=7ffced36ec4c items=0 ppid=2242 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:36.497843 kernel: audit: type=1300 audit(1747442496.466:564): arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffced36ec60 a2=0 a3=7ffced36ec4c items=0 ppid=2242 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:36.498016 kernel: audit: type=1327 audit(1747442496.466:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:36.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:36.519000 audit[5930]: NETFILTER_CFG table=nat:143 family=2 entries=38 op=nft_register_chain pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:36.528762 kernel: audit: type=1325 audit(1747442496.519:565): table=nat:143 family=2 entries=38 op=nft_register_chain pid=5930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:36.528848 kernel: audit: type=1300 audit(1747442496.519:565): 
arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffced36ec60 a2=0 a3=7ffced36ec4c items=0 ppid=2242 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:36.519000 audit[5930]: SYSCALL arch=c000003e syscall=46 success=yes exit=11364 a0=3 a1=7ffced36ec60 a2=0 a3=7ffced36ec4c items=0 ppid=2242 pid=5930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:36.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:36.558146 kernel: audit: type=1327 audit(1747442496.519:565): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:36.569565 env[1303]: time="2025-05-17T00:41:36.569397592Z" level=info msg="shim disconnected" id=bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d May 17 00:41:36.569565 env[1303]: time="2025-05-17T00:41:36.569466813Z" level=warning msg="cleaning up after shim disconnected" id=bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d namespace=k8s.io May 17 00:41:36.569565 env[1303]: time="2025-05-17T00:41:36.569477873Z" level=info msg="cleaning up dead shim" May 17 00:41:36.575473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d-rootfs.mount: Deactivated successfully. 
May 17 00:41:36.600147 env[1303]: time="2025-05-17T00:41:36.599745910Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5946 runtime=io.containerd.runc.v2\n" May 17 00:41:36.946175 kubelet[2118]: I0517 00:41:36.935313 2118 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:37.035897 systemd-networkd[1079]: califcd07a666c0: Link DOWN May 17 00:41:37.035908 systemd-networkd[1079]: califcd07a666c0: Lost carrier May 17 00:41:37.120430 kernel: audit: type=1325 audit(1747442497.108:566): table=filter:144 family=2 entries=59 op=nft_register_rule pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:41:37.120616 kernel: audit: type=1300 audit(1747442497.108:566): arch=c000003e syscall=46 success=yes exit=9064 a0=3 a1=7ffd6387ddb0 a2=0 a3=7ffd6387dd9c items=0 ppid=3837 pid=6010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.108000 audit[6010]: NETFILTER_CFG table=filter:144 family=2 entries=59 op=nft_register_rule pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:41:37.108000 audit[6010]: SYSCALL arch=c000003e syscall=46 success=yes exit=9064 a0=3 a1=7ffd6387ddb0 a2=0 a3=7ffd6387dd9c items=0 ppid=3837 pid=6010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.108000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:41:37.128840 kernel: audit: type=1327 
audit(1747442497.108:566): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:41:37.109000 audit[6010]: NETFILTER_CFG table=filter:145 family=2 entries=4 op=nft_unregister_chain pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:41:37.134298 kernel: audit: type=1325 audit(1747442497.109:567): table=filter:145 family=2 entries=4 op=nft_unregister_chain pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:41:37.109000 audit[6010]: SYSCALL arch=c000003e syscall=46 success=yes exit=560 a0=3 a1=7ffd6387ddb0 a2=0 a3=555eda1e7000 items=0 ppid=3837 pid=6010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.109000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.033 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.034 [INFO][5969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" iface="eth0" netns="/var/run/netns/cni-96c718f8-bb73-4045-5459-8b8c52e67728" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.034 [INFO][5969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" iface="eth0" netns="/var/run/netns/cni-96c718f8-bb73-4045-5459-8b8c52e67728" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.086 [INFO][5969] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" after=52.339172ms iface="eth0" netns="/var/run/netns/cni-96c718f8-bb73-4045-5459-8b8c52e67728" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.087 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.087 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.134 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.134 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.135 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.315 [INFO][6005] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.315 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.319 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:41:37.325714 env[1303]: 2025-05-17 00:41:37.322 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:37.330762 systemd[1]: run-netns-cni\x2d96c718f8\x2dbb73\x2d4045\x2d5459\x2d8b8c52e67728.mount: Deactivated successfully. 
May 17 00:41:37.349087 env[1303]: time="2025-05-17T00:41:37.349018593Z" level=info msg="TearDown network for sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" successfully" May 17 00:41:37.349352 env[1303]: time="2025-05-17T00:41:37.349324762Z" level=info msg="StopPodSandbox for \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" returns successfully" May 17 00:41:37.453000 audit[6014]: NETFILTER_CFG table=filter:146 family=2 entries=36 op=nft_register_rule pid=6014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:37.453000 audit[6014]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffde4eb65c0 a2=0 a3=7ffde4eb65ac items=0 ppid=2242 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:37.459000 audit[6014]: NETFILTER_CFG table=nat:147 family=2 entries=36 op=nft_register_rule pid=6014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:37.459000 audit[6014]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffde4eb65c0 a2=0 a3=7ffde4eb65ac items=0 ppid=2242 pid=6014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:37.486989 env[1303]: time="2025-05-17T00:41:37.486690550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:41:37.489461 sshd[5881]: pam_unix(sshd:session): session closed for 
user core May 17 00:41:37.499573 systemd[1]: Started sshd@22-10.0.0.136:22-10.0.0.1:36578.service. May 17 00:41:37.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.136:22-10.0.0.1:36578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.515636 kubelet[2118]: I0517 00:41:37.515535 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls964\" (UniqueName: \"kubernetes.io/projected/272cb491-0dd7-4c17-9835-83fe9d59eb06-kube-api-access-ls964\") pod \"272cb491-0dd7-4c17-9835-83fe9d59eb06\" (UID: \"272cb491-0dd7-4c17-9835-83fe9d59eb06\") " May 17 00:41:37.515855 kubelet[2118]: I0517 00:41:37.515665 2118 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/272cb491-0dd7-4c17-9835-83fe9d59eb06-calico-apiserver-certs\") pod \"272cb491-0dd7-4c17-9835-83fe9d59eb06\" (UID: \"272cb491-0dd7-4c17-9835-83fe9d59eb06\") " May 17 00:41:37.528000 audit[5881]: USER_END pid=5881 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.528000 audit[5881]: CRED_DISP pid=5881 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.532020 systemd[1]: sshd@21-10.0.0.136:22-10.0.0.1:36568.service: Deactivated successfully. 
May 17 00:41:37.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.136:22-10.0.0.1:36568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.555294 kubelet[2118]: I0517 00:41:37.549198 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272cb491-0dd7-4c17-9835-83fe9d59eb06-kube-api-access-ls964" (OuterVolumeSpecName: "kube-api-access-ls964") pod "272cb491-0dd7-4c17-9835-83fe9d59eb06" (UID: "272cb491-0dd7-4c17-9835-83fe9d59eb06"). InnerVolumeSpecName "kube-api-access-ls964". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:41:37.555678 kubelet[2118]: I0517 00:41:37.555633 2118 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272cb491-0dd7-4c17-9835-83fe9d59eb06-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "272cb491-0dd7-4c17-9835-83fe9d59eb06" (UID: "272cb491-0dd7-4c17-9835-83fe9d59eb06"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:41:37.563880 systemd[1]: var-lib-kubelet-pods-272cb491\x2d0dd7\x2d4c17\x2d9835\x2d83fe9d59eb06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dls964.mount: Deactivated successfully. May 17 00:41:37.565073 systemd[1]: var-lib-kubelet-pods-272cb491\x2d0dd7\x2d4c17\x2d9835\x2d83fe9d59eb06-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 17 00:41:37.566882 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:41:37.576039 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit. May 17 00:41:37.580695 systemd-logind[1293]: Removed session 22. 
May 17 00:41:37.591000 audit[6015]: USER_ACCT pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.596593 sshd[6015]: Accepted publickey for core from 10.0.0.1 port 36578 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:37.596000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.596000 audit[6015]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1d605da0 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.596000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:37.603268 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:37.617226 kubelet[2118]: I0517 00:41:37.616409 2118 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/272cb491-0dd7-4c17-9835-83fe9d59eb06-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 17 00:41:37.617226 kubelet[2118]: I0517 00:41:37.616439 2118 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls964\" (UniqueName: \"kubernetes.io/projected/272cb491-0dd7-4c17-9835-83fe9d59eb06-kube-api-access-ls964\") on node \"localhost\" DevicePath \"\"" May 17 00:41:37.626660 systemd[1]: Started session-23.scope. May 17 00:41:37.631691 systemd-logind[1293]: New session 23 of user core. 
May 17 00:41:37.650000 audit[6015]: USER_START pid=6015 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.658000 audit[6022]: CRED_ACQ pid=6022 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.751981 env[1303]: time="2025-05-17T00:41:37.751745870Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:37.755868 env[1303]: time="2025-05-17T00:41:37.755703178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:37.765405 kubelet[2118]: E0517 00:41:37.756283 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:41:37.765405 kubelet[2118]: E0517 00:41:37.756361 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:41:37.765405 kubelet[2118]: E0517 00:41:37.756522 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fb95b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-6mszs_calico-system(4c9eee5c-cc38-4032-8b07-e8d97094a990): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:37.765405 kubelet[2118]: E0517 00:41:37.758525 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:41:37.872321 sshd[6015]: pam_unix(sshd:session): session closed for user core May 17 00:41:37.874000 audit[6015]: USER_END pid=6015 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.875000 audit[6015]: CRED_DISP pid=6015 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:37.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.136:22-10.0.0.1:36578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.881572 systemd[1]: sshd@22-10.0.0.136:22-10.0.0.1:36578.service: Deactivated successfully. May 17 00:41:37.883426 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:41:37.887153 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit. May 17 00:41:37.895216 systemd-logind[1293]: Removed session 23. May 17 00:41:38.488481 kubelet[2118]: I0517 00:41:38.487271 2118 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272cb491-0dd7-4c17-9835-83fe9d59eb06" path="/var/lib/kubelet/pods/272cb491-0dd7-4c17-9835-83fe9d59eb06/volumes" May 17 00:41:40.158811 kernel: hrtimer: interrupt took 11696779 ns May 17 00:41:42.886907 systemd[1]: Started sshd@23-10.0.0.136:22-10.0.0.1:36580.service. May 17 00:41:42.904997 kernel: kauditd_printk_skb: 22 callbacks suppressed May 17 00:41:42.907323 kernel: audit: type=1130 audit(1747442502.884:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:42.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:42.975000 audit[6035]: USER_ACCT pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:42.978296 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 36580 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:42.986195 kernel: audit: type=1101 audit(1747442502.975:583): pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:42.986000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:42.988086 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:43.005136 kernel: audit: type=1103 audit(1747442502.986:584): pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.005317 kernel: audit: type=1006 audit(1747442502.986:585): pid=6035 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 17 00:41:43.005688 systemd[1]: Started session-24.scope. 
May 17 00:41:42.986000 audit[6035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd730ee440 a2=3 a3=0 items=0 ppid=1 pid=6035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:43.007800 systemd-logind[1293]: New session 24 of user core. May 17 00:41:43.015168 kernel: audit: type=1300 audit(1747442502.986:585): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd730ee440 a2=3 a3=0 items=0 ppid=1 pid=6035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:42.986000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:43.010000 audit[6035]: USER_START pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.037357 kernel: audit: type=1327 audit(1747442502.986:585): proctitle=737368643A20636F7265205B707269765D May 17 00:41:43.040867 kernel: audit: type=1105 audit(1747442503.010:586): pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.040944 kernel: audit: type=1103 audit(1747442503.019:587): pid=6038 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.019000 audit[6038]: CRED_ACQ pid=6038 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.289000 audit[6035]: USER_END pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.293836 systemd[1]: sshd@23-10.0.0.136:22-10.0.0.1:36580.service: Deactivated successfully. May 17 00:41:43.290528 sshd[6035]: pam_unix(sshd:session): session closed for user core May 17 00:41:43.289000 audit[6035]: CRED_DISP pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.299324 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:41:43.301094 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit. May 17 00:41:43.302200 systemd-logind[1293]: Removed session 24. 
May 17 00:41:43.305948 kernel: audit: type=1106 audit(1747442503.289:588): pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.306145 kernel: audit: type=1104 audit(1747442503.289:589): pid=6035 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:43.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.136:22-10.0.0.1:36580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:44.492682 env[1303]: time="2025-05-17T00:41:44.492642472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:41:44.827715 env[1303]: time="2025-05-17T00:41:44.827639443Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:44.829923 env[1303]: time="2025-05-17T00:41:44.829866911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:44.834016 kubelet[2118]: E0517 00:41:44.833419 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:41:44.834016 kubelet[2118]: E0517 00:41:44.833480 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:41:44.834016 kubelet[2118]: E0517 00:41:44.833599 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bfa66715bacc4f5c85e952a4173b2591,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:44.838658 env[1303]: time="2025-05-17T00:41:44.838617079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:41:45.112583 env[1303]: time="2025-05-17T00:41:45.112400004Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:41:45.113871 env[1303]: time="2025-05-17T00:41:45.113818913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:41:45.114126 kubelet[2118]: E0517 00:41:45.114056 2118 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:45.114209 kubelet[2118]: E0517 00:41:45.114136 2118 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed 
to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:41:45.114324 kubelet[2118]: E0517 00:41:45.114273 2118 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]
ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8568d5-d7fx5_calico-system(89fcd0a8-8017-46e2-b5fb-22df060c0c43): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:41:45.127601 kubelet[2118]: E0517 00:41:45.126273 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:45.932000 audit[6052]: NETFILTER_CFG table=filter:148 family=2 entries=24 op=nft_register_rule pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:45.932000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd3e1e8c50 a2=0 a3=7ffd3e1e8c3c items=0 ppid=2242 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:45.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 
00:41:45.941000 audit[6052]: NETFILTER_CFG table=nat:149 family=2 entries=106 op=nft_register_chain pid=6052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:41:45.941000 audit[6052]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd3e1e8c50 a2=0 a3=7ffd3e1e8c3c items=0 ppid=2242 pid=6052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:45.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:41:48.300029 systemd[1]: Started sshd@24-10.0.0.136:22-10.0.0.1:34962.service. May 17 00:41:48.314829 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:41:48.314978 kernel: audit: type=1130 audit(1747442508.299:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.136:22-10.0.0.1:34962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:48.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.136:22-10.0.0.1:34962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:48.406000 audit[6056]: USER_ACCT pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.412365 sshd[6056]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:48.428466 kernel: audit: type=1101 audit(1747442508.406:594): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.429363 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:48.428000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.438006 kernel: audit: type=1103 audit(1747442508.428:595): pid=6056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.438074 kernel: audit: type=1006 audit(1747442508.428:596): pid=6056 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 17 00:41:48.428000 audit[6056]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdeb6c9650 a2=3 a3=0 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:48.439538 
systemd[1]: Started session-25.scope. May 17 00:41:48.439870 systemd-logind[1293]: New session 25 of user core. May 17 00:41:48.446227 kernel: audit: type=1300 audit(1747442508.428:596): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdeb6c9650 a2=3 a3=0 items=0 ppid=1 pid=6056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:48.446516 kernel: audit: type=1327 audit(1747442508.428:596): proctitle=737368643A20636F7265205B707269765D May 17 00:41:48.428000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:48.469000 audit[6056]: USER_START pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.481707 kernel: audit: type=1105 audit(1747442508.469:597): pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.480000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.495310 kernel: audit: type=1103 audit(1747442508.480:598): pid=6059 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.691357 sshd[6056]: pam_unix(sshd:session): session closed for user core May 17 
00:41:48.698000 audit[6056]: USER_END pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.708844 systemd[1]: sshd@24-10.0.0.136:22-10.0.0.1:34962.service: Deactivated successfully. May 17 00:41:48.709967 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:41:48.713998 kernel: audit: type=1106 audit(1747442508.698:599): pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.699000 audit[6056]: CRED_DISP pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.721157 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit. May 17 00:41:48.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.136:22-10.0.0.1:34962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:48.732184 kernel: audit: type=1104 audit(1747442508.699:600): pid=6056 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:48.739645 systemd-logind[1293]: Removed session 25. 
May 17 00:41:51.497194 kubelet[2118]: E0517 00:41:51.497149 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-6mszs" podUID="4c9eee5c-cc38-4032-8b07-e8d97094a990" May 17 00:41:53.716152 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:41:53.716340 kernel: audit: type=1130 audit(1747442513.701:602): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.136:22-10.0.0.1:41610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:53.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.136:22-10.0.0.1:41610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:53.703395 systemd[1]: Started sshd@25-10.0.0.136:22-10.0.0.1:41610.service. 
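The kubelet `ImagePullBackOff` entries above bury the failing image reference under several layers of quoting. A hedged sketch for pulling the image name back out — the regex is an assumption tuned to this particular quoting style, not any kubelet API:

```python
import re

# After "pulling image" comes a run of escape characters (backslashes and
# quotes); the image reference itself contains neither, so capture up to
# the next escape character.
IMAGE_RE = re.compile(r'pulling image [\\"]+([^\\"]+)')

def backoff_images(log_line: str) -> list[str]:
    return IMAGE_RE.findall(log_line)

line = (r'err="failed to \"StartContainer\" for \"goldmane\" with '
        r'ImagePullBackOff: \"Back-off pulling image '
        r'\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\""')
print(backoff_images(line))
# -> ['ghcr.io/flatcar/calico/goldmane:v3.30.0']
```

The same pattern also recovers both references from the two-image whisker error later in this log.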
May 17 00:41:53.803922 kernel: audit: type=1101 audit(1747442513.790:603): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.804084 kernel: audit: type=1103 audit(1747442513.795:604): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.790000 audit[6094]: USER_ACCT pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.795000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.796874 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:53.804677 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:53.807627 systemd[1]: Started session-26.scope. May 17 00:41:53.811234 kernel: audit: type=1006 audit(1747442513.795:605): pid=6094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 May 17 00:41:53.808178 systemd-logind[1293]: New session 26 of user core. 
May 17 00:41:53.795000 audit[6094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe98182e0 a2=3 a3=0 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:53.820796 kernel: audit: type=1300 audit(1747442513.795:605): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe98182e0 a2=3 a3=0 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:53.795000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:53.826131 kernel: audit: type=1327 audit(1747442513.795:605): proctitle=737368643A20636F7265205B707269765D May 17 00:41:53.817000 audit[6094]: USER_START pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.833943 kernel: audit: type=1105 audit(1747442513.817:606): pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.817000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:53.846605 kernel: audit: type=1103 audit(1747442513.817:607): pid=6097 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:54.054391 sshd[6094]: pam_unix(sshd:session): session closed for user core May 17 00:41:54.051000 audit[6094]: USER_END pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:54.065135 kernel: audit: type=1106 audit(1747442514.051:608): pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:54.063000 audit[6094]: CRED_DISP pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:54.070143 kernel: audit: type=1104 audit(1747442514.063:609): pid=6094 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:54.067281 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit. May 17 00:41:54.068810 systemd[1]: sshd@25-10.0.0.136:22-10.0.0.1:41610.service: Deactivated successfully. May 17 00:41:54.069771 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:41:54.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.136:22-10.0.0.1:41610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:54.074981 systemd-logind[1293]: Removed session 26. May 17 00:41:55.452635 kubelet[2118]: I0517 00:41:55.452594 2118 scope.go:117] "RemoveContainer" containerID="57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01" May 17 00:41:55.458146 env[1303]: time="2025-05-17T00:41:55.458073210Z" level=info msg="RemoveContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\"" May 17 00:41:55.473960 env[1303]: time="2025-05-17T00:41:55.473894004Z" level=info msg="RemoveContainer for \"57273c95175d80d5071759be7b2dcf961f3972d916ae45636b8407e164159f01\" returns successfully" May 17 00:41:55.477376 env[1303]: time="2025-05-17T00:41:55.475454540Z" level=info msg="StopPodSandbox for \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\"" May 17 00:41:55.485091 kubelet[2118]: E0517 00:41:55.485036 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.595 [WARNING][6119] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.596 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.596 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" iface="eth0" netns="" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.596 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.596 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.661 [INFO][6126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.661 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.661 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.683 [WARNING][6126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.683 [INFO][6126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.693 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:41:55.743363 env[1303]: 2025-05-17 00:41:55.699 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.743363 env[1303]: time="2025-05-17T00:41:55.734184409Z" level=info msg="TearDown network for sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" successfully" May 17 00:41:55.743363 env[1303]: time="2025-05-17T00:41:55.741719543Z" level=info msg="StopPodSandbox for \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" returns successfully" May 17 00:41:55.758057 env[1303]: time="2025-05-17T00:41:55.757573960Z" level=info msg="RemovePodSandbox for \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\"" May 17 00:41:55.758323 env[1303]: time="2025-05-17T00:41:55.758070978Z" level=info msg="Forcibly stopping sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\"" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.837 [WARNING][6145] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.837 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.837 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" iface="eth0" netns="" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.837 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.837 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.907 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.907 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.907 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.923 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.923 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" HandleID="k8s-pod-network.77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" Workload="localhost-k8s-calico--apiserver--5d67ffffc--lwqn5-eth0" May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.927 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:41:55.981504 env[1303]: 2025-05-17 00:41:55.975 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863" May 17 00:41:55.983203 env[1303]: time="2025-05-17T00:41:55.983157759Z" level=info msg="TearDown network for sandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" successfully" May 17 00:41:56.006650 env[1303]: time="2025-05-17T00:41:56.006511312Z" level=info msg="RemovePodSandbox \"77907546a32965e8a38785aa8e6e9f8b15a8185dcf9b175eb43f0bdcb1c8b863\" returns successfully" May 17 00:41:56.007488 env[1303]: time="2025-05-17T00:41:56.007438542Z" level=info msg="StopPodSandbox for \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\"" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.092 [WARNING][6171] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.092 [INFO][6171] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.092 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" iface="eth0" netns="" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.092 [INFO][6171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.092 [INFO][6171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.197 [INFO][6179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.197 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.198 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.207 [WARNING][6179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.207 [INFO][6179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.217 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:41:56.221359 env[1303]: 2025-05-17 00:41:56.219 [INFO][6171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.221918 env[1303]: time="2025-05-17T00:41:56.221397870Z" level=info msg="TearDown network for sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" successfully" May 17 00:41:56.221918 env[1303]: time="2025-05-17T00:41:56.221440571Z" level=info msg="StopPodSandbox for \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" returns successfully" May 17 00:41:56.222571 env[1303]: time="2025-05-17T00:41:56.222547000Z" level=info msg="RemovePodSandbox for \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\"" May 17 00:41:56.222786 env[1303]: time="2025-05-17T00:41:56.222745774Z" level=info msg="Forcibly stopping sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\"" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.305 [WARNING][6195] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.305 [INFO][6195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.305 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" iface="eth0" netns="" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.305 [INFO][6195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.305 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.356 [INFO][6204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.356 [INFO][6204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.356 [INFO][6204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.368 [WARNING][6204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.372 [INFO][6204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" HandleID="k8s-pod-network.bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" Workload="localhost-k8s-calico--apiserver--5d67ffffc--hkhhx-eth0" May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.375 [INFO][6204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:41:56.402432 env[1303]: 2025-05-17 00:41:56.386 [INFO][6195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d" May 17 00:41:56.403135 env[1303]: time="2025-05-17T00:41:56.403072830Z" level=info msg="TearDown network for sandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" successfully" May 17 00:41:56.411089 env[1303]: time="2025-05-17T00:41:56.410713944Z" level=info msg="RemovePodSandbox \"bd9a3b18507a26d5d0cdf8522ea54aab02c8e6a0eed10cd2b55664d5005df91d\" returns successfully" May 17 00:41:57.486783 kubelet[2118]: E0517 00:41:57.486747 2118 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-bc8568d5-d7fx5" podUID="89fcd0a8-8017-46e2-b5fb-22df060c0c43" May 17 00:41:59.057157 systemd[1]: Started sshd@26-10.0.0.136:22-10.0.0.1:41624.service. 
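The kubelet "Nameserver limits exceeded" warning in this log reflects glibc's resolver cap of three `nameserver` entries (`MAXNS`): kubelet logs the trimmed list it actually applies. A minimal sketch of that trimming, assuming a plain resolv.conf-style input (this mirrors the behavior, not kubelet's actual code path):

```python
# glibc's stub resolver honors at most three "nameserver" lines (MAXNS = 3);
# anything beyond that is silently ignored, which is what kubelet warns about.
MAXNS = 3

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [ln.split()[1] for ln in resolv_conf.splitlines()
               if ln.startswith("nameserver") and len(ln.split()) > 1]
    return servers[:MAXNS]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(conf))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```

The result matches the applied line reported by kubelet above: `1.1.1.1 1.0.0.1 8.8.8.8`.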
May 17 00:41:59.064145 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:41:59.064312 kernel: audit: type=1130 audit(1747442519.055:611): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.136:22-10.0.0.1:41624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.136:22-10.0.0.1:41624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.193319 kernel: audit: type=1101 audit(1747442519.166:612): pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.193472 kernel: audit: type=1103 audit(1747442519.177:613): pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.166000 audit[6216]: USER_ACCT pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.177000 audit[6216]: CRED_ACQ pid=6216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.180240 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:59.198101 sshd[6216]: Accepted publickey for core from 
10.0.0.1 port 41624 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:59.214696 kernel: audit: type=1006 audit(1747442519.177:614): pid=6216 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 May 17 00:41:59.214848 kernel: audit: type=1300 audit(1747442519.177:614): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd37bd8b30 a2=3 a3=0 items=0 ppid=1 pid=6216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:59.177000 audit[6216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd37bd8b30 a2=3 a3=0 items=0 ppid=1 pid=6216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:59.210736 systemd[1]: Started session-27.scope. May 17 00:41:59.212801 systemd-logind[1293]: New session 27 of user core. 
May 17 00:41:59.177000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:41:59.225218 kernel: audit: type=1327 audit(1747442519.177:614): proctitle=737368643A20636F7265205B707269765D May 17 00:41:59.229000 audit[6216]: USER_START pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.235000 audit[6219]: CRED_ACQ pid=6219 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.259155 kernel: audit: type=1105 audit(1747442519.229:615): pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.259312 kernel: audit: type=1103 audit(1747442519.235:616): pid=6219 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.470652 sshd[6216]: pam_unix(sshd:session): session closed for user core May 17 00:41:59.470000 audit[6216]: USER_END pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.473441 systemd[1]: sshd@26-10.0.0.136:22-10.0.0.1:41624.service: Deactivated successfully. 
May 17 00:41:59.474572 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:41:59.470000 audit[6216]: CRED_DISP pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.481207 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit. May 17 00:41:59.482034 kernel: audit: type=1106 audit(1747442519.470:617): pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.482145 kernel: audit: type=1104 audit(1747442519.470:618): pid=6216 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:41:59.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.136:22-10.0.0.1:41624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.482640 systemd-logind[1293]: Removed session 27.
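Every audit record in this log is keyed by `audit(<epoch-seconds>.<millis>:<serial>)`. A small sketch for converting that key to wall-clock time; for example, `1747442519.470:618` from the final records above lands on the same `May 17 00:41:59` UTC prefix the journal printed:

```python
from datetime import datetime, timezone

# Parse the audit(<epoch>.<millis>:<serial>) key from an audit record
# and render the timestamp as UTC wall-clock time.
def audit_time(key: str) -> str:
    stamp, _serial = key.split(":")
    return datetime.fromtimestamp(float(stamp), tz=timezone.utc) \
        .strftime("%Y-%m-%d %H:%M:%S")

print(audit_time("1747442519.470:618"))
# -> 2025-05-17 00:41:59
```

The serial after the colon ties together all records emitted for one event (e.g. the SYSCALL and PROCTITLE pair for a single sshd operation), which is how `ausearch` groups them.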